I am trying to implement a classification head for a Reformer transformer. The classification head works fine, but when I try to change one of the config parameters, config.axial_pos_shape, i.e. the model's sequence-length parameter, it throws an error:
size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([512, 1, 64]) from checkpoint, the shape in current model is torch.Size([64, 1, 64]).
size mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 1024, 192]) from checkpoint, the shape in current model is torch.Size([1, 128, 192]).
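The mismatch can be read straight off the shapes: each axial weight tensor is built from one entry of axial_pos_shape and one entry of axial_pos_embds_dim. A small sketch of that arithmetic (the checkpoint's axial_pos_shape of (512, 1024) is inferred from the error above; the (64, 128) values come from the Python code further down):

# Sketch of where the mismatched shapes come from (values inferred from the error above):
# weights.0 has shape (axial_pos_shape[0], 1, axial_pos_embds_dim[0])
# weights.1 has shape (1, axial_pos_shape[1], axial_pos_embds_dim[1])
checkpoint_axial_pos_shape = (512, 1024)  # implied by torch.Size([512, 1, 64]) and torch.Size([1, 1024, 192])
current_axial_pos_shape = (64, 128)       # what the Python code below sets
axial_pos_embds_dim = (64, 192)

for name, (n1, n2) in [("checkpoint", checkpoint_axial_pos_shape), ("current model", current_axial_pos_shape)]:
    print(name, (n1, 1, axial_pos_embds_dim[0]), (1, n2, axial_pos_embds_dim[1]))
# checkpoint (512, 1, 64) (1, 1024, 192)
# current model (64, 1, 64) (1, 128, 192)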
Config:
{
"architectures": [
"ReformerForSequenceClassification"
],
"attention_head_size": 64,
"attention_probs_dropout_prob": 0.1,
"attn_layers": [
"local",
"lsh",
"local",
"lsh",
"local",
"lsh"
],
"axial_norm_std": 1.0,
"axial_pos_embds": true,
"axial_pos_embds_dim": [
64,
192
],
"axial_pos_shape": [
64,
256
],
"chunk_size_feed_forward": 0,
"chunk_size_lm_head": 0,
"eos_token_id": 2,
"feed_forward_size": 512,
"hash_seed": null,
"hidden_act": "relu",
"hidden_dropout_prob": 0.05,
"hidden_size": 256,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": true,
"layer_norm_eps": 1e-12,
"local_attention_probs_dropout_prob": 0.05,
"local_attn_chunk_length": 64,
"local_num_chunks_after": 0,
"local_num_chunks_before": 1,
"lsh_attention_probs_dropout_prob": 0.0,
"lsh_attn_chunk_length": 64,
"lsh_num_chunks_after": 0,
"lsh_num_chunks_before": 1,
"max_position_embeddings": 8192,
"model_type": "reformer",
"num_attention_heads": 2,
"num_buckets": [
64,
128
],
"num_chunks_after": 0,
"num_chunks_before": 1,
"num_hashes": 1,
"num_hidden_layers": 6,
"output_past": true,
"pad_token_id": 0,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 100
}
},
"vocab_size": 320
}

Python code:
import torch
from transformers import ReformerConfig, ReformerForSequenceClassification

config = ReformerConfig()
config.max_position_embeddings = 8192
config.axial_pos_shape=[64, 128]
#config = ReformerConfig.from_pretrained('./cnp/config.json', output_attention=True)
model = ReformerForSequenceClassification(config)
model.load_state_dict(torch.load("./cnp/pytorch_model.bin"))

Posted on 2020-12-16 16:08:09
I ran into the same problem when trying to halve the default maximum sequence length of 65536 (128*512) used for Reformer pre-training.
As mentioned by @cronoik, those unnecessary weights come from the Position Embeddings layer. The Reformer model uses an Axial Position Encodings strategy to learn the position embeddings (instead of having fixed ones like BERT). Axial Position Encodings store the position embeddings in a memory-efficient way, using two small tensors rather than one big one.
However, the concept of position embeddings stays exactly the same: obtaining a different embedding for each position.
That said, in theory (correct me if I am misunderstanding something), removing the last position embeddings to match your custom maximum sequence length should not affect performance. You can refer to this post from HuggingFace for a more detailed description of Axial Position Encodings and to see where to truncate the position embeddings tensor.
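To make the two-small-tensors idea concrete, here is a minimal sketch (a simplified illustration, not the exact transformers implementation) of how the full position-embedding table is recovered from the two axial weight tensors; the n1, n2, d1, d2 values are taken from the question's code and error message:

import torch

n1, n2 = 64, 128   # axial_pos_shape (as set in the question's Python code)
d1, d2 = 64, 192   # axial_pos_embds_dim (d1 + d2 = hidden_size = 256)

w0 = torch.randn(n1, 1, d1)  # reformer.embeddings.position_embeddings.weights.0
w1 = torch.randn(1, n2, d2)  # reformer.embeddings.position_embeddings.weights.1

# Broadcast both tensors to (n1, n2, d), concatenate along the hidden dimension,
# then flatten the two position axes into a single sequence axis.
full = torch.cat([w0.expand(n1, n2, d1), w1.expand(n1, n2, d2)], dim=-1)
full = full.reshape(n1 * n2, d1 + d2)
print(full.shape)  # torch.Size([8192, 256]): one embedding per position, stored with far fewer parameters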
I have successfully adapted and used a Reformer with a custom maximum length of 32768 (128*256), using the code below:
import torch
from transformers import ReformerForSequenceClassification

# Load the initial pretrained model
model = ReformerForSequenceClassification.from_pretrained('google/reformer-enwik8', num_labels=2)
# Reshape the Axial Position Embeddings layer to match the desired max seq length.
# Slicing with [:, :256, :] keeps the leading dimension of 1, so the saved weight has the
# (1, 256, dim) shape that a freshly initialized model with the new config expects.
model.reformer.embeddings.position_embeddings.weights[1] = torch.nn.Parameter(
    model.reformer.embeddings.position_embeddings.weights[1][:, :256, :]
)
# Update the config file to match custom max seq length
model.config.axial_pos_shape = 128, 256
model.config.max_position_embeddings = 128*256 # 32768
# Save model with custom max length
output_model_path = "path/to/model"
model.save_pretrained(output_model_path)
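After saving, the resized checkpoint should load back without the size-mismatch error. A short usage sketch (the path and the printed values are illustrative only):

from transformers import ReformerForSequenceClassification

# Reload the resized model; the position-embedding shapes now match the updated config
model = ReformerForSequenceClassification.from_pretrained("path/to/model", num_labels=2)
print(model.config.axial_pos_shape)          # [128, 256]
print(model.config.max_position_embeddings)  # 32768
print(model.reformer.embeddings.position_embeddings.weights[1].shape)  # second dimension is now 256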
https://stackoverflow.com/questions/62603089