I tokenize the sentences with the tokenizer and then mean-pool over the tokens, taking the attention mask into account, to get one vector per sentence. However, the default embedding size is 768, and I want to reduce it to 200, which fails. Here is my code.
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')
model.resize_token_embeddings(200)
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)

Error:
2193 # Note [embedding_renorm set_grad_enabled]
2194 # XXX: equivalent to
2195 # with torch.no_grad():
2196 # torch.embedding_renorm_
2197 # remove once script supports set_grad_enabled
2198 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2199 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

My expected output is:
When running:
print(len(sentence_embeddings[0]))
-> 200

Posted on 2022-09-30 20:44:19
I think you have misunderstood resize_token_embeddings. According to the documentation:
Resizes input token embeddings matrix of the model if new_num_tokens != config.vocab_size. Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method.
That is, it is meant to be used when you add or remove tokens from the vocabulary; the "resizing" refers to resizing the token->embedding lookup table, not the embedding dimension. Calling resize_token_embeddings(200) shrinks the vocabulary to 200 entries, so any input token with an id >= 200 causes the IndexError you see above.
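For comparison, here is a minimal sketch of the intended use of resize_token_embeddings, i.e. growing the embedding matrix after adding new tokens to the vocabulary (the token string '[NEW_TOKEN]' is just an illustrative placeholder):

# Intended use: keep the embedding matrix in sync with the vocabulary size
num_added = tokenizer.add_tokens(['[NEW_TOKEN]'])
model.resize_token_embeddings(len(tokenizer))  # pass the new vocab size, not a hidden size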
What I think you want to do is change the hidden_size of the BERT model. To do that, you would have to change hidden_size in config.json, which re-initializes all the weights, and you would have to retrain everything from scratch, which is computationally very expensive.
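For completeness, a sketch of what that would look like, with hyperparameters chosen by me as an assumption (hidden_size must be divisible by num_attention_heads, so 200 pairs with e.g. 8 heads rather than BERT-base's default 12); note the resulting model has fresh random weights and would need full pretraining:

from transformers import BertConfig, BertModel
config = BertConfig(hidden_size=200, num_attention_heads=8, intermediate_size=800)
small_model = BertModel(config)  # randomly initialized; emits 200-dim hidden states, but untrained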
I think your best option is to add a linear layer of dimension (768 x 200) on top of the BertModel and fine-tune it on your downstream task.
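A minimal sketch of that approach, continuing the question's script without the resize_token_embeddings(200) call (the name projection is my own, and its weights start out random, so the 200-dim vectors only become meaningful after fine-tuning on a downstream task):

import torch.nn as nn

projection = nn.Linear(768, 200)  # maps BERT's 768-dim sentence vectors to 200 dims

with torch.no_grad():
    model_output = model(**encoded_input)
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
reduced_embeddings = projection(sentence_embeddings)
print(len(reduced_embeddings[0]))  # -> 200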
https://stackoverflow.com/questions/73713671