
sklearn_crfsuite.CRF UnicodeEncodeError
Stack Overflow user
Asked on 2022-10-03 06:26:29
1 answer · 43 views · 0 following · 0 votes
  • Python version: 3.6
  • Operating system: Windows

I am trying to train a Chinese NER model with sklearn_crfsuite.CRF on my dataset. After cleaning the dataset and fitting the model, it raises this error:

loading training data to CRFsuite:   0%|          | 0/700 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "main_script.py", line 22, in <module>
    crf_pipeline.model.fit(x_train, y_train, x_test, y_test)
  File "C:\Users\weber\PycharmProjects\demo-insurance-backend\venv\lib\site-packages\sklearn_crfsuite\estimator.py", line 314, in fit
    trainer.append(xseq, yseq)
  File "pycrfsuite\_pycrfsuite.pyx", line 312, in pycrfsuite._pycrfsuite.BaseTrainer.append
  File "stringsource", line 48, in vector.from_py.__pyx_convert_vector_from_py_std_3a__3a_string
  File "stringsource", line 15, in string.from_py.__pyx_convert_string_from_py_std__in_string
UnicodeEncodeError: 'ascii' codec can't encode characters in position 2-6: ordinal not in range(128)
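The Cython frames in the traceback suggest pycrfsuite converts each label to a C++ std::string using the default ASCII codec, so any non-ASCII character in a tag reproduces this error. A minimal stdlib-only sketch (the tag string is one of the values produced by tag_dictionary in the code below):

```python
# One of the tag values from tag_dictionary; it contains Chinese characters,
# which the ASCII codec cannot encode.
tag = 'B-解剖部位'

try:
    tag.encode('ascii')
except UnicodeEncodeError as exc:
    print(exc)  # same error class as in the traceback above

# An ASCII-only tag encodes without error:
print('B-ANATOMY'.encode('ascii'))
```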

The data is stored in a .txt file with one record per line (separated by \n); originalText holds the text and entities holds the entity annotations.
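For context, a single line of such a file can be parsed with ast.literal_eval, as the preprocessing code below does (the record content here is made up for illustration):

```python
import ast

# A hypothetical line of the .txt dataset: one Python-dict literal per line.
line = ("{'originalText': '左膝疼痛', "
        "'entities': [{'label_type': '解剖部位', 'start_pos': 0, 'end_pos': 2}]}")

record = ast.literal_eval(line)  # safely evaluate the dict literal
print(record['originalText'])               # the raw text
print(record['entities'][0]['label_type'])  # the entity's label
```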

Below is my code for preprocessing the dataset:

import ast
from opencc import OpenCC
import sklearn_crfsuite

from sklearn.model_selection import train_test_split
from tqdm import tqdm

tag_dictionary = {
    '影像檢查': 'I-影像檢查',
    '手術': 'S-手術',
    '實驗室檢驗': 'E-實驗室檢驗',
    '解剖部位': 'B-解剖部位',
    '疾病和診斷': 'D-疾病和診斷'
}

def check_entity(entities):
    return [
        entity
        for entity in entities
        if entity['label_type'] in tag_dictionary
    ]

def build_tag_seq(text, entities):
    tag_list = ['O' for token in text]
    for entity in entities:
        if tag_dictionary is None:
            tag = entity['label_type']
        else:
            tag = tag_dictionary[entity['label_type']]
        tag_list[entity['start_pos']] = f'{tag}-B'
        for i in range(entity['start_pos']+1, entity['end_pos']):
            tag_list[i] = f'{tag}-I'
    return tag_list

def data_converter(data):
    cc = OpenCC('s2t')  # convert Simplified to Traditional Chinese
    data_dict = ast.literal_eval(cc.convert(data))  # parse the txt line into a dict
    return data_dict

def process_data(data):
    data_dict = data_converter(data)
    text = data_dict['originalText']
    entities = data_dict['entities']
    entities = check_entity(entities)
    tag_seq = build_tag_seq(text, entities)
    return text, tag_seq

def load_txt_data(stop=-1):
    data_x = list()  # token sequences (the text of each record)
    data_y = list()  # the corresponding tag sequence for each record
    for path in ['subtask1_training_part1.txt']:
        with open(path, 'r', encoding='utf-8') as f:
            for i, line in tqdm(enumerate(f.readlines())):
                text = line.strip()
                if len(text) > 3:
                    temp_x, temp_y = process_data(text)

                    data_x.append(temp_x)
                    data_y.append(temp_y)
                    if i == stop:
                        break
    return data_x, data_y

x, y = load_txt_data()

model = sklearn_crfsuite.CRF(
    algorithm='l2sgd',
    c2=1.0,
    max_iterations=1000,
    all_possible_transitions=True,
    all_possible_states=True,
    verbose=True
)

model.fit(x, y)
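To make the character-level tagging concrete, here is a self-contained sketch of what the tagging step above is aiming for, written with conventional B-/I- prefixes rather than the suffixes in the original (the text, offsets, and tag map are made up for illustration):

```python
def tag_sequence(text, entities, tag_map):
    # Start with 'O' for every character, then overwrite entity spans.
    tags = ['O'] * len(text)
    for ent in entities:
        label = tag_map[ent['label_type']]
        tags[ent['start_pos']] = f'B-{label}'
        for i in range(ent['start_pos'] + 1, ent['end_pos']):
            tags[i] = f'I-{label}'
    return tags

text = '左膝疼痛'
entities = [{'label_type': '解剖部位', 'start_pos': 0, 'end_pos': 2}]
print(tag_sequence(text, entities, {'解剖部位': '解剖部位'}))
# ['B-解剖部位', 'I-解剖部位', 'O', 'O']
```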

Below is the list of packages I installed:

pip install opencc sklearn sklearn_crfsuite 

Has anyone run into a similar error and solved it? Any help would be greatly appreciated.


1 Answer

Stack Overflow user

Accepted answer

Answered on 2022-10-03 07:24:31

I found that I cannot use Chinese characters in the NER tags.

After changing the values in tag_dictionary to integers, it works.
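The fix amounts to keeping every tag string ASCII-only, e.g. by replacing the Chinese values in tag_dictionary with integer codes or ASCII names before training. A sketch (the English codes here are made-up placeholders, not from the original post):

```python
# ASCII-only replacement for the tag_dictionary values (codes are hypothetical).
ascii_tag_dictionary = {
    '影像檢查': 'I-IMAGING',
    '手術': 'S-SURGERY',
    '實驗室檢驗': 'E-LAB',
    '解剖部位': 'B-ANATOMY',
    '疾病和診斷': 'D-DIAGNOSIS',
}

# Every value now survives the ASCII conversion that pycrfsuite performs.
for value in ascii_tag_dictionary.values():
    value.encode('ascii')  # would raise UnicodeEncodeError for a Chinese tag
print('all tag values are ASCII-safe')
```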

0 votes
The original content of this page was provided by Stack Overflow.
Original link:
https://stackoverflow.com/questions/73931787
