I'm trying to use this model from Hugging Face for question answering. Its code is at the link:
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(res)
>>>
{'score': 0.2117144614458084,
 'start': 59,
 'end': 84,
 'answer': 'gives freedom to the user'}

However, I don't know how to get a loss so that I can fine-tune this model. I've been looking through the Hugging Face tutorials, but apart from the Trainer API, or the training loop below from the link (which is sequence classification, not QA), I haven't found anything:
import torch
from transformers import AdamW, AutoTokenizer, AutoModelForSequenceClassification
# Same as before
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = [
    "I've been waiting for a HuggingFace course my whole life.",
    "This course is amazing!",
]
batch = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
# This is new
batch["labels"] = torch.tensor([1, 1])
optimizer = AdamW(model.parameters())
loss = model(**batch).loss
loss.backward()
optimizer.step()

Suppose the true answer is "freedom to the user" rather than "gives freedom to the user".
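For reference, `AutoModelForQuestionAnswering` returns a loss directly when you pass `start_positions` and `end_positions` (the token indices of the gold answer span), in the same way the sequence-classification model above returns one when given `labels`. A minimal sketch using the question's own model and the corrected answer, assuming a fast tokenizer so that `char_to_token` is available to map the character span to token indices:

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Why is model conversion important?"
context = ("The option to convert models between FARM and transformers gives "
           "freedom to the user and let people easily switch between frameworks.")
answer = "freedom to the user"  # the desired gold answer

inputs = tokenizer(question, context, return_tensors="pt")

# Locate the answer's character span in the context, then map it to token indices
# (sequence_index=1 selects the context in the question/context pair).
char_start = context.index(answer)
char_end = char_start + len(answer) - 1  # index of the answer's last character
start_tok = inputs.char_to_token(0, char_start, sequence_index=1)
end_tok = inputs.char_to_token(0, char_end, sequence_index=1)

# Passing the gold positions makes the model return a loss
# (cross-entropy averaged over the start and end predictions).
outputs = model(**inputs,
                start_positions=torch.tensor([start_tok]),
                end_positions=torch.tensor([end_tok]))
loss = outputs.loss

# One optimization step, mirroring the classification example above
optimizer = AdamW(model.parameters(), lr=5e-5)
loss.backward()
optimizer.step()
```

This is the same loss the Trainer would compute internally for a QA model; in a real fine-tuning loop you would repeat it over batches of (question, context, answer-span) examples.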
Posted on 2022-10-02 17:32:40
You don't have to compute the loss yourself. Hugging Face provides the Trainer class, which you can use to train your model. It is optimized for Hugging Face models and incorporates many deep-learning best practices you may find useful. See here: course / Trainer
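To make that concrete, here is a minimal Trainer sketch for this QA model. It uses a single hypothetical training example built from the question's own context (real fine-tuning would use a SQuAD-style dataset); the `output_dir` name is arbitrary, and the model computes the loss itself from the `start_positions`/`end_positions` fields in each batch:

```python
import torch
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Hypothetical one-example "dataset"; replace with real QA data in practice.
question = "Why is model conversion important?"
context = ("The option to convert models between FARM and transformers gives "
           "freedom to the user and let people easily switch between frameworks.")
answer = "freedom to the user"

enc = tokenizer(question, context, truncation=True)
char_start = context.index(answer)
start_tok = enc.char_to_token(char_start, sequence_index=1)
end_tok = enc.char_to_token(char_start + len(answer) - 1, sequence_index=1)

class ToyQADataset(torch.utils.data.Dataset):
    """One-example dataset; each item carries the gold span positions."""
    def __len__(self):
        return 1
    def __getitem__(self, idx):
        return {
            "input_ids": torch.tensor(enc["input_ids"]),
            "attention_mask": torch.tensor(enc["attention_mask"]),
            "start_positions": torch.tensor(start_tok),
            "end_positions": torch.tensor(end_tok),
        }

args = TrainingArguments(output_dir="qa-finetune",  # arbitrary checkpoint dir
                         num_train_epochs=1,
                         per_device_train_batch_size=1,
                         report_to=[])
trainer = Trainer(model=model, args=args, train_dataset=ToyQADataset())
result = trainer.train()
print(result.training_loss)
```

Trainer handles batching, the backward pass, and the optimizer for you, which is why you never need to touch the loss directly.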
https://stackoverflow.com/questions/73927835