I want to implement a chatbot using Hugging Face Transformers. My code so far is shown below. The Transformers model already takes the history of past user inputs into account.
Are there other factors (additional code) I need to consider when building the chatbot?
Second, how can I modify the code to run with TensorFlow instead of PyTorch?
Later on, I also plan to fine-tune the model on other data, and to test different models such as BlenderBot and GPT2. I think testing these different models should be as simple as swapping the corresponding model names into AutoTokenizer.from_pretrained("microsoft/DialoGPT-small") and AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small").
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
Here is an example of using the DialoGPT model in TensorFlow:
from transformers import TFAutoModelForCausalLM, AutoTokenizer, BlenderbotTokenizer, TFBlenderbotForConditionalGeneration
import tensorflow as tf

chat_bots = {
    'BlenderBot': [BlenderbotTokenizer.from_pretrained('facebook/blenderbot-400M-distill'), TFBlenderbotForConditionalGeneration.from_pretrained('facebook/blenderbot-400M-distill')],
    'DialoGPT': [AutoTokenizer.from_pretrained("microsoft/DialoGPT-small"), TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")],
}
key = 'DialoGPT'
tokenizer, model = chat_bots[key]

for step in range(5):
    # encode the new user input, add the eos_token and return a TensorFlow tensor
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='tf')

    # append the new user input tokens to the chat history
    if step > 0:
        bot_input_ids = tf.concat([chat_history_ids, new_user_input_ids], axis=-1)
    else:
        bot_input_ids = new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print(key + ": {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))

>> User:How are you?
DialoGPT: I'm here
>> User:Why are you here
DialoGPT: I'm here
>> User:But why
DialoGPT: I'm here
>> User:Where is here
DialoGPT: Where is where?
>> User:Here
DialoGPT: Where is here?

If you want to compare different chatbots, you may want to adjust their decoder parameters, since they are not always the same. For example, with BlenderBot and max_length 50 you get this kind of response with the current code:
>> User:How are you?
BlenderBot: ! I am am great! how how how are are are???

In general, you should ask yourself which special characters are important for a chatbot (depending on your domain) and which characters should or can be omitted; keeping the decoder settings for each model separate also helps, as in the sketch below.
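As a minimal sketch (the generation_kwargs dict and the parameter values in it are assumptions for illustration, not tuned recommendations), you could keep per-model decoder settings next to the chat_bots dict and inspect each tokenizer's special tokens before deciding what to strip:

from transformers import AutoTokenizer, TFAutoModelForCausalLM

# hypothetical per-model decoder settings; the values are placeholders, not tuned recommendations
generation_kwargs = {
    'BlenderBot': {'max_length': 50},
    'DialoGPT': {'max_length': 1000},
}

key = 'DialoGPT'
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# inspect which special tokens this tokenizer actually defines
# before deciding which ones to skip when decoding
print(tokenizer.special_tokens_map)

# encode a prompt and generate with the model-specific settings
input_ids = tokenizer.encode("How are you?" + tokenizer.eos_token, return_tensors='tf')
reply_ids = model.generate(input_ids, pad_token_id=tokenizer.eos_token_id, **generation_kwargs[key])
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))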
You should also try out different decoding methods, such as greedy search, beam search, random sampling, top-k sampling, and nucleus sampling, and find out which one works best for your use case. For more information on this topic, check out this post:
https://stackoverflow.com/questions/70055966
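As a rough sketch of how those decoding methods map onto arguments of model.generate (the concrete values such as num_beams=5, top_k=50, and top_p=0.92 are illustrative assumptions, not recommendations):

from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

input_ids = tokenizer.encode("How are you?" + tokenizer.eos_token, return_tensors='tf')

# greedy search: the default when no beam/sampling arguments are given
greedy_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# beam search: keep the num_beams most likely sequences at each step
beam_ids = model.generate(input_ids, max_length=100, num_beams=5, early_stopping=True,
                          pad_token_id=tokenizer.eos_token_id)

# top-k sampling: sample only from the k most likely next tokens
top_k_ids = model.generate(input_ids, max_length=100, do_sample=True, top_k=50,
                           pad_token_id=tokenizer.eos_token_id)

# nucleus (top-p) sampling: sample from the smallest token set whose
# cumulative probability exceeds top_p (top_k=0 disables the top-k filter)
top_p_ids = model.generate(input_ids, max_length=100, do_sample=True, top_p=0.92, top_k=0,
                           pad_token_id=tokenizer.eos_token_id)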