I'm trying to run DistilBERT with ktrain in Colab, but I get the error "too many values to unpack". I'm doing toxic comment classification; I uploaded 'train.csv' from CivilComments. I can run BERT, but not DistilBERT.
# prerequisites:
!pip install ktrain
import ktrain
from ktrain import text as txt
DATA_PATH = '/content/train.csv'
NUM_WORDS = 50000
MAXLEN = 150
label_columns = ["toxic", "severe_toxic", "obscene",
                 "threat", "insult", "identity_hate"]
If I simply use 'bert' for preprocessing, it works fine, but then I cannot use the DistilBERT model. When I preprocess with 'distilbert', I get the following error:
(x_test, y_test), preproc = txt.texts_from_csv(DATA_PATH, 'comment_text', label_columns=label_columns, val_filepath=None, max_features=NUM_WORDS, maxlen=MAXLEN, preprocess_mode='distilbert')
"too many values to unpack (expected 2)". If I replace 'distilbert' with 'bert', it works fine (code below), but then I'm forced to use BERT as the model. Preprocessing with 'bert' works without problems:
(x_train, y_train), (x_test, y_test), preproc = txt.texts_from_csv(DATA_PATH, 'comment_text', label_columns=label_columns, val_filepath=None, max_features=NUM_WORDS, maxlen=MAXLEN, preprocess_mode='bert')
This runs without errors, but afterwards I cannot use DistilBERT, as shown below:
Example: model = txt.text_classifier('distilbert', train_data=(x_train, y_train), preproc=preproc)
Error message: if 'bert' is selected model, then preprocess_mode='bert' should be used and vice versa
I want to use
(x_test, y_test), preproc = txt.texts_from_csv(DATA_PATH, 'comment_text', label_columns=label_columns, val_filepath=None, max_features=NUM_WORDS, maxlen=MAXLEN, preprocess_mode='distilbert')
together with the DistilBERT model. How can I avoid the error "too many values to unpack"?
The code is based on: Arun Maiya (2019). ktrain: A Lightweight Wrapper for Keras to Help Train Neural Networks. https://towardsdatascience.com/ktrain-a-lightweight-wrapper-for-keras-to-help-train-neural-networks-82851ba889c
Posted on 2021-04-23 03:24:54
As shown in this example notebook, the texts_from_* functions return TransformerDataset objects (rather than NumPy arrays) when preprocess_mode='distilbert' is specified. So, you need to do this:
trn, val, preproc = txt.texts_from_csv(DATA_PATH, 'comment_text', label_columns=label_columns, val_filepath=None, max_features=NUM_WORDS, maxlen=MAXLEN, preprocess_mode='distilbert')
Source: https://stackoverflow.com/questions/67218962
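To see why the original call fails, here is a minimal sketch in plain Python (no ktrain required). The function fake_texts_from_csv is a hypothetical stand-in that only mimics the three-value return shape of texts_from_csv; the actual return values with preprocess_mode='distilbert' are TransformerDataset objects, not plain strings:

```python
def fake_texts_from_csv():
    # Hypothetical stand-in for txt.texts_from_csv(..., preprocess_mode='distilbert'):
    # it returns THREE values: training data, validation data, and a preprocessor.
    # With 'distilbert', the first two are dataset objects, not (x, y) array tuples.
    return "trn", "val", "preproc"

# This mirrors the failing call pattern from the question: two assignment
# targets on the left, three values on the right.
try:
    (x_test, y_test), preproc = fake_texts_from_csv()
except ValueError as e:
    print(e)  # -> too many values to unpack (expected 2)

# Correct: bind all three return values.
trn, val, preproc = fake_texts_from_csv()
```

The bert-mode pattern `(x_train, y_train), (x_test, y_test), preproc = ...` only works because bert preprocessing returns nested (x, y) tuples; distilbert mode does not, so the datasets must be bound as whole objects and passed along as `trn` and `val`.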