```python
import numpy as np

embeddings_index = {}
for line in open(glove_path, encoding='utf-8'):  # glove_path: path to the GloVe text file
    values = line.split()
    word = values[0]
    embed = np.array(values[1:], dtype=np.float32)
    embeddings_index[word] = embed
print('Loaded %s word vectors.' % len(embeddings_index))

# Embeddings for available words
data_embeddings = {key: value for key, value in embeddings_index.items()
                   if key in vocabulary}  # `vocabulary`: the set of words in your dataset
```
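A common next step is to pack the filtered vectors into a matrix indexed by vocabulary id, so an embedding layer can consume it. This is a sketch; `word2idx`, the toy `data_embeddings`, and the 100-dimensional size are assumptions for illustration:

```python
import numpy as np

embedding_dim = 100                    # assumed GloVe dimension
word2idx = {'the': 0, 'cat': 1}        # hypothetical word-to-index mapping
data_embeddings = {                    # stand-in for the dict built above
    'the': np.ones(embedding_dim, dtype=np.float32),
}

matrix = np.zeros((len(word2idx), embedding_dim), dtype=np.float32)
for word, idx in word2idx.items():
    vec = data_embeddings.get(word)
    if vec is not None:                # words without a GloVe vector stay zero
        matrix[idx] = vec
```

Rows for out-of-vocabulary words remain zero here; another common choice is to initialize them with small random values instead.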
The model returns all hidden states. I want to create a new model with the same architecture and randomly initialized weights, except for the embedding layers:

bert.embeddings.position_embeddings.weight (512, 768)
bert.embeddings.token_type_embeddings.weight (2, 768)
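One way to do this with Hugging Face transformers is to instantiate a fresh `BertModel` from the same config (which gives random weights everywhere) and then copy only the embedding sub-module's weights across. This is a sketch: the tiny `BertConfig` sizes below are placeholders so it runs quickly, not bert-base, and the first model stands in for one loaded with `from_pretrained`:

```python
import torch
from transformers import BertConfig, BertModel

# Tiny config so the sketch is fast; use your real model's config in practice.
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64)

pretrained = BertModel(config)   # stand-in for BertModel.from_pretrained(...)
fresh = BertModel(config)        # same architecture, fresh random weights

# Copy only the embedding layers (word, position, token-type) from "pretrained".
fresh.embeddings.load_state_dict(pretrained.embeddings.state_dict())
```

After the copy, `fresh` shares the pretrained embedding weights while its encoder layers keep their new random initialization.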