
How to build a gensim dictionary that includes bigrams?

Stack Overflow user
Asked on 2018-07-19 23:07:56
2 answers · 5.7K views · 0 following · 7 votes

I am trying to build a TF-IDF model that scores both unigrams and bigrams using gensim. To do this, I build a gensim dictionary and then use that dictionary to create the bag-of-words representation of the corpus that the model is built from.

The dictionary is built like this:

dict = gensim.corpora.Dictionary(tokens)

where tokens is a list of unigrams and bigrams that looks like this:

[('restore',),
 ('diversification',),
 ('made',),
 ('transport',),
 ('The',),
 ('grass',),
 ('But',),
 ('distinguished', 'newspaper'),
 ('came', 'well'),
 ('produced',),
 ('car',),
 ('decided',),
 ('sudden', 'movement'),
 ('looking', 'glasses'),
 ('shapes', 'replaced'),
 ('beauties',),
 ('put',),
 ('college', 'days'),
 ('January',),
 ('sometimes', 'gives')]

However, when I feed a list like this to gensim.corpora.Dictionary(), it splits every token back into unigrams, for example:

test = gensim.corpora.Dictionary([('happy', 'dog')])
[test[id] for id in test]
=> ['dog', 'happy']

Is there a way to generate a dictionary with gensim that keeps the bigrams?


2 answers

Stack Overflow user

Answered on 2019-01-24 18:43:49

from gensim.models import Phrases
from gensim.models.phrases import Phraser
from gensim import models

docs = ['new york is is united states',
        'new york is most populated city in the world',
        'i love to stay in new york']

token_ = [doc.split(" ") for doc in docs]
# note: in gensim >= 4.0 the delimiter must be a str, e.g. delimiter=' '
bigram = Phrases(token_, min_count=1, threshold=2, delimiter=b' ')

bigram_phraser = Phraser(bigram)

bigram_token = []
for sent in token_:
    bigram_token.append(bigram_phraser[sent])

The output will be:

[['new york', 'is', 'is', 'united', 'states'],
 ['new york', 'is', 'most', 'populated', 'city', 'in', 'the', 'world'],
 ['i', 'love', 'to', 'stay', 'in', 'new york']]

import gensim

# now you can build a dictionary from the bigram tokens
dict_ = gensim.corpora.Dictionary(bigram_token)
print(dict_.token2id)

# convert each document into a bag-of-words vector,
# then fit gensim's tf-idf model on the corpus
corpus = [dict_.doc2bow(text) for text in bigram_token]
tfidf_model = models.TfidfModel(corpus)
6 votes

Stack Overflow user

Answered on 2019-01-28 13:39:37

You have to "phrase" your corpus to detect bigrams before creating the dictionary.

I suggest you also stem or lemmatize it before feeding it to the dictionary. Here is an example using nltk's Snowball stemmer:

import re
from gensim.models.phrases import Phrases, Phraser
from gensim.corpora.dictionary import Dictionary
from gensim.models import TfidfModel
from nltk.stem.snowball import SnowballStemmer as Stemmer

stemmer = Stemmer("YOUR_LANG") # see nltk.stem.snowball doc

stopWords = {"YOUR_STOPWORDS_FOR_LANG"} # as a set

docs = ["LIST_OF_STR"]

def tokenize(text):
    """
    return list of str from a str
    """
    # keep lowercase alphanums and "-" but not "_"
    return [w for w in re.split(r"_+|[^\w-]+", text.lower()) if w not in stopWords]

docs = [tokenize(doc) for doc in docs]
phrases = Phrases(docs)
bigrams = Phraser(phrases)
corpus = [[stemmer.stem(w) for w in bigrams[doc]] for doc in docs]
dictionary = Dictionary(corpus)
# and here is your tfidf model:
tfidf = TfidfModel(dictionary=dictionary, normalize=True)
0 votes
The original content of this page was provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/51426107
