
Large NER training set

Stack Overflow user
Asked on 2019-10-15 15:02:09
1 answer · 341 views · 0 followers · 0 votes

I have a project that involves taking property descriptions and tagging the key data elements. I decided to use spaCy to train my own NER pipeline, since these descriptions don't read like conventional sentences. However, when I go to train, it gets roughly 20% of the way through and then crashes, and I can't find any explanation for it.

The setup

Below is a sample of my JSON. The full JSON is 2.6 MB and contains over 1,000 descriptions ranging from 40 to 500 tokens each, about 54,000 tokens in total. (Note that the data below has been altered so it does not reflect a real property.)

[{
    "id": 1, "paragraphs": 
    [{
    "raw": "Lots 1 and 2 of Block 1, in the City of Santa Clarita, County of Los Angeles, State of California, as per Map recorded in Book 1, Page 1 of Miscellaneous Maps, in the Office of the County Recorder of said County "
        , "sentences": 
        [{
            "tokens": 
            [
                        {"id": 1, "orth": "Lots", "ner": "B-LOT"}
                        , {"id": 2, "orth": "1", "ner": "I-LOT"}
                        , {"id": 3, "orth": "and", "ner": "I-LOT"}
                        , {"id": 4, "orth": "2", "ner": "L-LOT"}
                        , {"id": 5, "orth": "of", "ner": "O"}
                        , {"id": 6, "orth": "Block", "ner": "B-BLOCK"}
                        , {"id": 7, "orth": "1,", "ner": "L-BLOCK"}
                        , {"id": 8, "orth": "in", "ner": "O"}
                        , {"id": 9, "orth": "the", "ner": "O"}
                        , {"id": 10, "orth": "City", "ner": "O"}
                        , {"id": 11, "orth": "of", "ner": "O"}
                        , {"id": 12, "orth": "Santa", "ner": "O"}
                        , {"id": 13, "orth": "Clarita,", "ner": "O"}
                        , {"id": 14, "orth": "County", "ner": "O"}
                        , {"id": 15, "orth": "of", "ner": "O"}
                        , {"id": 16, "orth": "Los", "ner": "O"}
                        , {"id": 17, "orth": "Angeles,", "ner": "O"}
                        , {"id": 18, "orth": "State", "ner": "O"}
                        , {"id": 19, "orth": "of", "ner": "O"}
                        , {"id": 20, "orth": "California,", "ner": "O"}
                        , {"id": 21, "orth": "as", "ner": "O"}
                        , {"id": 22, "orth": "per", "ner": "O"}
                        , {"id": 23, "orth": "Map", "ner": "O"}
                        , {"id": 24, "orth": "recorded", "ner": "O"}
                        , {"id": 25, "orth": "in", "ner": "O"}
                        , {"id": 26, "orth": "Book", "ner": "B-BOOK"}
                        , {"id": 27, "orth": "1,", "ner": "L-BOOK"}
                        , {"id": 28, "orth": "Page", "ner": "B-PAGE"}
                        , {"id": 29, "orth": "1", "ner": "L-PAGE"}
                        , {"id": 30, "orth": "of", "ner": "O"}
                        , {"id": 31, "orth": "Miscellaneous", "ner": "B-MAPTYPE"}
                        , {"id": 32, "orth": "Maps,", "ner": "L-MAPTYPE"}
                        , {"id": 33, "orth": "in", "ner": "O"}
                        , {"id": 34, "orth": "the", "ner": "O"}
                        , {"id": 35, "orth": "Office", "ner": "O"}
                        , {"id": 36, "orth": "of", "ner": "O"}
                        , {"id": 37, "orth": "the", "ner": "O"}
                        , {"id": 38, "orth": "County", "ner": "O"}
                        , {"id": 39, "orth": "Recorder", "ner": "O"}
                        , {"id": 40, "orth": "of", "ner": "O"}
                        , {"id": 41, "orth": "said", "ner": "O"}
                        , {"id": 42, "orth": "County", "ner": "O"}
            ]
        }]
    }]
}]
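
For reference, the document and token counts above can be sanity-checked with a few lines of Python; the file name here is a placeholder, not from the question:

import json

# Count documents and tokens in the spaCy-JSON training file.
# "train.json" is a placeholder path.
with open("train.json", encoding="utf8") as f:
    data = json.load(f)

n_tokens = sum(len(sent["tokens"])
               for doc in data
               for para in doc["paragraphs"]
               for sent in para["sentences"])
print("{} documents, {} tokens".format(len(data), n_tokens))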

I took the train.py file from spaCy's cli folder and created my own version of it for this process. I kept the file's core functionality and only added a few things, such as some new labels for my dataset and a custom tokenizer that splits on whitespace instead of using the regular tokenizer. The function is below:
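
The custom whitespace tokenizer (WTok, assigned near the end of the function) isn't shown in the question. In spaCy 2.x a tokenizer can be any callable that takes a text and returns a Doc, so a minimal sketch of what it might look like is:

from spacy.tokens import Doc

class WTok(object):
    # A guess at the asker's whitespace tokenizer: split on single spaces
    # and build a Doc directly from the resulting words.
    def __init__(self, nlp):
        self.vocab = nlp.vocab

    def __call__(self, text):
        words = text.split(" ")
        spaces = [True] * len(words)  # assume every token is space-separated
        return Doc(self.vocab, words=words, spaces=spaces)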

# Imports as in spaCy 2.0.x's cli/train.py, which this function was adapted
# from (exact import paths assume spaCy 2.0.16):
import tqdm
from timeit import default_timer as timer
from thinc.neural._classes.model import Model
from spacy import about, util
from spacy.attrs import PROB, IS_OOV, CLUSTER, LANG
from spacy.cli._messages import Messages
from spacy.cli.train import print_progress
from spacy.compat import json_dumps
from spacy.gold import GoldCorpus
from spacy.util import prints, minibatch

# WTok is the asker's custom whitespace tokenizer (a sketch is shown above).

def NERTrain(lang
          , output_dir
          , train_data
          , dev_data
          , n_iter=30
          , n_sents=0
          , parser_multitasks=''
          , entity_multitasks=''
          , use_gpu=-1
          , vectors=None
          , gold_preproc=False
          , version="0.0.0"
          , meta_path=None
          , verbose=False
          , newLabels = None):
    """
    Train a model. Expects data in spaCy's JSON format.
    """
    util.fix_random_seed()
    util.set_env_log(True)
    n_sents = n_sents or None
    output_path = util.ensure_path(output_dir)
    train_path = util.ensure_path(train_data)
    dev_path = util.ensure_path(dev_data)
    meta_path = util.ensure_path(meta_path)
    if not output_path.exists():
        output_path.mkdir()
    if not train_path.exists():
        prints(train_path, title=Messages.M050, exits=1)
    if dev_path and not dev_path.exists():
        prints(dev_path, title=Messages.M051, exits=1)
    if meta_path is not None and not meta_path.exists():
        prints(meta_path, title=Messages.M020, exits=1)
    meta = util.read_json(meta_path) if meta_path else {}
    if not isinstance(meta, dict):
        prints(Messages.M053.format(meta_type=type(meta)),
               title=Messages.M052, exits=1)
    meta.setdefault('lang', lang)
    meta.setdefault('name', 'unnamed')

    pipeline = ['ner']

    # Take dropout and batch size as generators of values -- dropout
    # starts high and decays sharply, to force the optimizer to explore.
    # Batch size starts at 1 and grows, so that we make updates quickly
    # at the beginning of training.
    dropout_rates = util.decaying(util.env_opt('dropout_from', 0.2),
                                  util.env_opt('dropout_to', 0.2),
                                  util.env_opt('dropout_decay', 0.0))
    batch_sizes = util.compounding(util.env_opt('batch_from', 1),
                                   util.env_opt('batch_to', 16),
                                   util.env_opt('batch_compound', 1.001))
    max_doc_len = util.env_opt('max_doc_len', 5000)
    corpus = GoldCorpus(train_path, dev_path, limit=n_sents)
    n_train_words = corpus.count_train()

    lang_class = util.get_lang_class(lang)
    nlp = lang_class()

    if "ner" in nlp.pipe_names:
        nlp.remove_pipe("ner")

    ner = nlp.create_pipe("ner")
    nlp.add_pipe(ner, first=True)

    meta['pipeline'] = pipeline
    nlp.meta.update(meta)
    if vectors:
        print("Load vectors model", vectors)
        util.load_model(vectors, vocab=nlp.vocab)
        for lex in nlp.vocab:
            values = {}
            for attr, func in nlp.vocab.lex_attr_getters.items():
                # These attrs are expected to be set by data. Others should
                # be set by calling the language functions.
                if attr not in (CLUSTER, PROB, IS_OOV, LANG):
                    values[lex.vocab.strings[attr]] = func(lex.orth_)
            lex.set_attrs(**values)
            lex.is_oov = False
#    for name in pipeline:
#        nlp.add_pipe(nlp.create_pipe(name), name=name)
    if parser_multitasks:
        for objective in parser_multitasks.split(','):
            nlp.parser.add_multitask_objective(objective)
    if entity_multitasks:
        for objective in entity_multitasks.split(','):
            nlp.entity.add_multitask_objective(objective)
    optimizer = nlp.begin_training(lambda: corpus.train_tuples, device=use_gpu)
    nlp._optimizer = None
    nlp.tokenizer = WTok(nlp)  # fixed: was misspelled "tockenizer", which would silently set an unused attribute
    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "ner"]

    if newLabels is not None:
        for l in newLabels:
            ner.add_label(l)

    print("Itn.  Dep Loss  NER Loss  UAS     NER P.  NER R.  NER F.  Tag %   Token %  CPU WPS  GPU WPS")
    try:
        train_docs = corpus.train_docs(nlp, projectivize=True, noise_level=0.0,
                                       gold_preproc=gold_preproc, max_length=0)
        train_docs = list(train_docs)
        with nlp.disable_pipes(*other_pipes):
            for i in range(n_iter):
                with tqdm.tqdm(total=n_train_words, leave=False) as pbar:
                    losses = {}
                    for batch in minibatch(train_docs, size=batch_sizes):
                        batch = [(d, g) for (d, g) in batch if len(d) < max_doc_len]
                        if not batch:
                            continue
                        docs, golds = zip(*batch)
                        nlp.update(docs, golds, sgd=optimizer,
                                   drop=next(dropout_rates), losses=losses)
                        pbar.update(sum(len(doc) for doc in docs))

                with nlp.use_params(optimizer.averages):
                    util.set_env_log(False)
                    epoch_model_path = output_path / ('model%d' % i)
                    nlp.to_disk(epoch_model_path)
                    nlp_loaded = util.load_model_from_path(epoch_model_path)
                    dev_docs = list(corpus.dev_docs(
                                    nlp_loaded,
                                    gold_preproc=gold_preproc))
                    nwords = sum(len(doc_gold[0]) for doc_gold in dev_docs)
                    start_time = timer()
                    scorer = nlp_loaded.evaluate(dev_docs, verbose)
                    end_time = timer()
                    if use_gpu < 0:
                        gpu_wps = None
                        cpu_wps = nwords/(end_time-start_time)
                    else:
                        gpu_wps = nwords/(end_time-start_time)
                        with Model.use_device('cpu'):
                            nlp_loaded = util.load_model_from_path(epoch_model_path)
                            dev_docs = list(corpus.dev_docs(
                                            nlp_loaded, gold_preproc=gold_preproc))
                            start_time = timer()
                            scorer = nlp_loaded.evaluate(dev_docs)
                            end_time = timer()
                            cpu_wps = nwords/(end_time-start_time)
                    acc_loc = (output_path / ('model%d' % i) / 'accuracy.json')
                    with acc_loc.open('w') as file_:
                        file_.write(json_dumps(scorer.scores))
                    meta_loc = output_path / ('model%d' % i) / 'meta.json'
                    meta['accuracy'] = scorer.scores
                    meta['speed'] = {'nwords': nwords, 'cpu': cpu_wps,
                                     'gpu': gpu_wps}
                    meta['vectors'] = {'width': nlp.vocab.vectors_length,
                                       'vectors': len(nlp.vocab.vectors),
                                       'keys': nlp.vocab.vectors.n_keys}
                    meta['lang'] = nlp.lang
                    meta['pipeline'] = pipeline
                    meta['spacy_version'] = '>=%s' % about.__version__
                    meta.setdefault('name', 'model%d' % i)
                    meta.setdefault('version', version)

                    with meta_loc.open('w') as file_:
                        file_.write(json_dumps(meta))
                    util.set_env_log(True)
                print_progress(i, losses, scorer.scores, cpu_wps=cpu_wps,
                               gpu_wps=gpu_wps)
    finally:
        print("Saving model...")
        with nlp.use_params(optimizer.averages):
            final_model_path = output_path / 'model-final'
            nlp.to_disk(final_model_path)
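
A hypothetical invocation, with placeholder paths and the entity labels from the JSON sample above (not shown in the question):

NERTrain("en", "models/property-ner", "train.json", "dev.json",
         n_iter=30,
         newLabels=["LOT", "BLOCK", "BOOK", "PAGE", "MAPTYPE"])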

What I did

When running the full JSON file failed, I also tried a smaller sample of 100 descriptions. That run went all the way through without any problems. Now, before I slice my dataset into chunks of 100 (which I really don't want to, and shouldn't have to, do), I'd like to know if anyone can take a look and tell whether this is 1. some spaCy limit I've hit, 2. a memory problem, or 3. a code problem I've overlooked.
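
For reference, if slicing does turn out to be necessary, splitting the JSON into chunks of 100 descriptions only takes a few lines; this is a hypothetical helper with placeholder paths:

import json
from pathlib import Path

def split_training_json(src, out_dir, chunk_size=100):
    # Split a spaCy-JSON training file into files of chunk_size documents each.
    docs = json.loads(Path(src).read_text(encoding="utf8"))
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i in range(0, len(docs), chunk_size):
        part = docs[i:i + chunk_size]
        name = "train_{:03d}.json".format(i // chunk_size)
        (out / name).write_text(json.dumps(part), encoding="utf8")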

Note that this process is running on my local machine, with the following specs:

PC specs

Windows 10

Intel(R) Core(TM) i7-6600U CPU @ 2.60 GHz (2.81 GHz)

16.0 GB RAM

Python 3.7.4

spaCy 2.0.16

Any help would be greatly appreciated.

EDIT 1:

After asking this question, I figured that in the meantime I would try running my file in batches of 100. Interestingly, one of those files caused the process to crash. I immediately assumed it was a data problem, so I added a print to the training function so I could see which text was causing it. But after I added the print, the file completed without error. I don't know how to interpret this; I'm just offering it as additional information.

EDIT 2:

I finally got an error message associated with the crash: Unhandled exception at 0x00007FF8EB9E2BE2 (ner.cp37-win_amd64.pyd) in python.exe: 0xC0000005: access violation reading location 0x000001C4213D1FE4. The error was flagged on invoke_main() in exe_common.inl. I've tried to find more information about this error but have come up with very little. It seems to be some kind of Windows error? Any help is much appreciated.


1 Answer

Stack Overflow user

Accepted answer

Answered on 2019-10-24 14:57:50

In the end, it turned out to be a version incompatibility between some of spaCy's dependencies. It seems to have been caused by several uninstalls and reinstalls of older and newer versions of spaCy. I set up a fresh environment with only the latest version of spaCy installed, and everything worked fine. If you are using a UI-based package installer, I would not trust it; it seems to link to older versions, and you are better off using pip from the terminal.

1 vote
Page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/58397661
