I am following this tutorial to understand the Trainer: https://huggingface.co/transformers/training.html
I copied the following code:
from datasets import load_dataset
import numpy as np
from datasets import load_metric
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
print('Download dataset ...')
raw_datasets = load_dataset("imdb")
from transformers import AutoTokenizer
print('Tokenize text ...')
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print('Prepare data ...')
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(500))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(500))
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
print('Define model ...')
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
print('Define trainer ...')
from transformers import TrainingArguments, Trainer
training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch")
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
print('Fine-tune train ...')
trainer.evaluate()

However, it does not report anything about the training metrics; instead it prints the following messages:
Download dataset ...
Reusing dataset imdb (/Users/congminmin/.cache/huggingface/datasets/imdb/plain_text/1.0.0/4ea52f2e58a08dbc12c2bd52d0d92b30b88c00230b4522801b3636782f625c5b)
Tokenize text ...
100%|██████████| 25/25 [00:06<00:00, 4.01ba/s]
100%|██████████| 25/25 [00:06<00:00, 3.99ba/s]
100%|██████████| 50/50 [00:13<00:00, 3.73ba/s]
Prepare data ...
Define model ...
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Define trainer ...
Fine-tune train ...
100%|██████████| 63/63 [08:35<00:00, 8.19s/it]
Process finished with exit code 0

Is the tutorial out of date? Should I make some configuration change so that the metrics get reported?
Posted on 2021-05-21 19:17:41
The evaluate function returns the metrics; it does not print them. Does

    metrics = trainer.evaluate()
    print(metrics)

work? Also, the messages say you are using the base bert model, which was not pretrained for sentence classification but is just the base language model. It therefore has no initial weights for the task and should be trained first.
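To see that the return value is just a plain dict, here is a minimal, dependency-free sketch of the same accuracy computation as the question's compute_metrics, with a hand-rolled mean standing in for load_metric("accuracy") (an assumption for illustration; the real metric object computes the same quantity):

```python
import numpy as np

def compute_metrics(eval_pred):
    """Same shape as the question's compute_metrics, without the datasets library."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # load_metric("accuracy").compute(...) returns a dict of this form
    return {"accuracy": float((predictions == labels).mean())}

# two examples: the model scores class 1 first, then class 0
logits = np.array([[0.1, 0.9], [0.8, 0.2]])
labels = np.array([1, 0])
print(compute_metrics((logits, labels)))  # {'accuracy': 1.0}
```

This is exactly the kind of dict that trainer.evaluate() folds into its result, which is why you have to print it yourself.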
Posted on 2021-05-27 04:30:57
Why are you calling trainer.evaluate()? That only runs evaluation on the validation set. If you want to fine-tune or train, you need to call:

    trainer.train()

Posted on 2021-11-23 21:49:05
I think you need to tell the Trainer how often to evaluate performance, using evaluation_strategy and eval_steps in TrainingArguments.
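A sketch of the TrainingArguments this describes, as a config fragment. The step counts of 50 are illustrative choices, not values from the question, and the argument is named evaluation_strategy as in the transformers version the question uses (newer releases renamed it to eval_strategy):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    "test_trainer",
    evaluation_strategy="steps",  # evaluate on a step interval instead of per epoch
    eval_steps=50,                # run evaluation every 50 optimizer steps
    logging_steps=50,             # log (report) metrics just as often
)
```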
https://stackoverflow.com/questions/67625349