
How can I use ktrain offline?

Asked by a Stack Overflow user on 2020-06-02 10:40:18
2 answers · 449 views · 0 followers · Score: 0

I trained my English model following this notebook (https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-06-sequence-tagging.ipynb). I was able to save my pretrained model and run it without problems.

However, I now need to run it again, but offline, and it doesn't work. I understand that I need to download the model files and do something similar to what is described here:

https://github.com/huggingface/transformers/issues/136

However, I can't figure out where in ktrain I need to change this setting.

I am running this:

```python
import ktrain

predictor = ktrain.load_predictor('Functions/my_english_nermodel')
```

And this is the error I get:

```text
Traceback (most recent call last):
  File "Z:\Functions\NER.py", line 155, in load_bert
    reloaded_predictor= ktrain.load_predictor('Z:/Functions/my_english_nermodel')
  File "C:\Program Files\Python37\lib\site-packages\ktrain\core.py", line 1316, in load_predictor
    preproc = pickle.load(f)
  File "C:\Program Files\Python37\lib\site-packages\ktrain\text\ner\anago\preprocessing.py", line 76, in __setstate__
    if self.te_model is not None: self.activate_transformer(self.te_model, layers=self.te_layers)
  File "C:\Program Files\Python37\lib\site-packages\ktrain\text\ner\anago\preprocessing.py", line 100, in activate_transformer
    self.te = TransformerEmbedding(model_name, layers=layers)
  File "C:\Program Files\Python37\lib\site-packages\ktrain\text\preprocessor.py", line 1095, in __init__
    self.tokenizer = self.tokenizer_type.from_pretrained(model_name)
  File "C:\Program Files\Python37\lib\site-packages\transformers\tokenization_utils.py", line 903, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "C:\Program Files\Python37\lib\site-packages\transformers\tokenization_utils.py", line 1008, in _from_pretrained
    list(cls.vocab_files_names.values()),
OSError: Model name 'bert-base-uncased' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed 'bert-base-dutch-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.

Process finished with exit code 1
```

2 Answers

Answered by a Stack Overflow user (accepted answer), posted 2020-06-04 13:18:34

More generally, pretrained models from the transformers library are downloaded to <home_directory>/.cache/torch/transformers. On Linux, for example, this would be /home/<user_name>/.cache/torch/transformers.

As the other answer indicates, to reload a ktrain predictor on a machine without internet access (for ktrain models that use a model from the transformers library), you need to copy the model files from that folder to the same location on the new machine.
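The copy step described above can be sketched in Python. This is a minimal sketch, not part of ktrain: the function name and the `src_cache`/`dst_cache` paths are illustrative, and the default cache location is the <home_directory>/.cache/torch/transformers folder mentioned above (newer versions of transformers may use a different cache layout):

```python
import os
import shutil

def copy_transformers_cache(src_cache: str, dst_cache: str) -> str:
    """Copy a transformers download cache to the same location on an
    offline machine (e.g. via a network share or a USB drive).

    src_cache: cache folder on the machine that had internet access
    dst_cache: the same path on the offline machine
    """
    if os.path.isdir(dst_cache):
        shutil.rmtree(dst_cache)           # drop any stale/partial copy
    shutil.copytree(src_cache, dst_cache)  # copies all model + tokenizer files
    return dst_cache

# Illustrative usage (paths are examples only):
# copy_transformers_cache('/mnt/usb/transformers',
#                         os.path.expanduser('~/.cache/torch/transformers'))
```

The key point is that the destination path must match the default cache location exactly, since the pickled ktrain preprocessor re-resolves the model name (here 'bert-base-uncased') through the transformers cache at load time.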

Score: 1

Answered by a Stack Overflow user, posted 2020-06-03 14:01:21

I found a solution. When running with an internet connection, it creates the folder 'C:\Users\lemolina.cache\torch\transformers'. I need to copy that same folder to the machine that cannot access the internet.

Score: 0
Original content provided by Stack Overflow; translation by Tencent Cloud's translation engine.
Original link: https://stackoverflow.com/questions/62150139