RuntimeError while fine-tuning a pre-trained model with the Hugging Face library on a SageMaker ml.p3.8xlarge instance.

finetuning_gpt2_script.py contains the following.

Libraries:
from transformers import Trainer, TrainingArguments
from transformers import EarlyStoppingCallback
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers import TextDataset, DataCollatorForLanguageModeling

Pre-trained model:
gpt2_model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")

Training and test dataset construction:
train_dataset = TextDataset(
tokenizer=gpt2_tokenizer,
file_path=train_path,
block_size=128)
test_dataset = TextDataset(
tokenizer=gpt2_tokenizer,
file_path=test_path,
block_size=128)
data_collator = DataCollatorForLanguageModeling(
tokenizer=gpt2_tokenizer, mlm=False,
)

train_path & test_path are unstructured text files of size 1.45 MB, with 200K lines of data.
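For reference, a minimal sketch of what TextDataset yields (using only the names defined above): it tokenizes the whole file and slices the token stream into fixed-length blocks, so every item is a block_size-long tensor of token ids.

sample = train_dataset[0]                  # a torch.LongTensor of token ids
print(sample.shape)                        # expected: torch.Size([128]), i.e. block_size
print(gpt2_tokenizer.decode(sample[:20]))  # decode the first 20 ids back to text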
Training arguments:
training_args = TrainingArguments(
output_dir="./gpt2-finetuned-models", # the output directory
overwrite_output_dir=True, # overwrite the content of the output directory
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=8, # batch size per device for training
per_device_eval_batch_size=8, # batch size per device for evaluation
save_steps=100, # a checkpoint is saved every this many steps
warmup_steps=500, # number of warmup steps for the learning rate scheduler
prediction_loss_only=True,
metric_for_best_model="eval_loss",
load_best_model_at_end=True,
evaluation_strategy="epoch",
)

training_args holds the training arguments used to configure the model training.
Trainer:
early_stop_callback = EarlyStoppingCallback(early_stopping_patience=3) # must be defined before being passed to the Trainer

trainer = Trainer(
model=gpt2_model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=test_dataset,
callbacks=[early_stop_callback],
)

Training:
trainer.train()
trainer.save_model(model_path)

Here, training is run for just one epoch on the 4 GPUs of an ml.p3.8xlarge instance.
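As a usage note, the directory written by save_model can later be reloaded with the same from_pretrained API. A minimal sketch (model_path is assumed to be defined elsewhere in the script; the tokenizer is reloaded from the hub since it was never passed to the Trainer and therefore not saved):

finetuned_model = GPT2LMHeadModel.from_pretrained(model_path)       # model_path: assumed defined above
finetuned_tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")  # original tokenizer, unchanged by fine-tuning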
Training is launched via torch.distributed.launch as follows:

python -m torch.distributed.launch finetuning_gpt2_script.py
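Note that torch.distributed.launch normally needs to be told how many worker processes to spawn; it is not shown whether this was passed in the original run, but a typical invocation for the 4 GPUs of an ml.p3.8xlarge would use the launcher's standard --nproc_per_node flag:

python -m torch.distributed.launch --nproc_per_node=4 finetuning_gpt2_script.py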
At the end of training, the following error is observed:

RuntimeError: Input tensor at index 3 has invalid shape [2, 2, 16, 128, 64] but expected [2, 4, 16, 128, 64]
Is the RuntimeError caused by the way train_dataset and test_dataset are built with TextDataset? Or is something wrong on the torch.distributed side?

Posted on 2021-01-21 08:34:05
This may be related to the batch-size mismatch suggested here (a batch size of 4 is expected, but a batch size of 2 is received). The proposed fix is to set the drop_last parameter of your DataLoader, like this:

train_text = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True, drop_last=True)

https://stackoverflow.com/questions/65822014
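Since the script above never builds a DataLoader itself but lets the Trainer construct its own, the Trainer-level analogue of this fix is the dataloader_drop_last option of TrainingArguments. A minimal sketch (same arguments as above, with only the relevant flag added; whether this resolves this exact shape error is an assumption):

training_args = TrainingArguments(
output_dir="./gpt2-finetuned-models",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
evaluation_strategy="epoch",
dataloader_drop_last=True, # drop the incomplete last batch so every GPU sees the same batch size
)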