I'm trying to build a Tf-Idf model that can score bigrams as well as unigrams using gensim. To do this, I build a gensim dictionary and then use that dictionary to create the bag-of-words representation of the corpus that the model is built from.
The dictionary-building step looks like this:
dict = gensim.corpora.Dictionary(tokens)
where tokens is a list of unigram and bigram tuples, like this:
[('restore',),
('diversification',),
('made',),
('transport',),
('The',),
('grass',),
('But',),
('distinguished', 'newspaper'),
('came', 'well'),
('produced',),
('car',),
('decided',),
('sudden', 'movement'),
('looking', 'glasses'),
('shapes', 'replaced'),
('beauties',),
('put',),
('college', 'days'),
('January',),
('sometimes', 'gives')]
However, when I feed a list like this into gensim.corpora.Dictionary(), the algorithm splits all of my tokens back into unigrams, for example:
test = gensim.corpora.Dictionary([(('happy', 'dog'))])
[test[id] for id in test]
=> ['dog', 'happy']
Is there a way to generate a dictionary with gensim that keeps the bigrams intact?
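For context: Dictionary treats each inner sequence as one document and each of its elements as one token, which is why the ('happy', 'dog') tuple above is split into two unigrams. A minimal workaround sketch (the join_ngrams helper and the '_' separator are illustrative assumptions, not gensim API) is to join each n-gram tuple into a single string token before building the dictionary:
import gensim
def join_ngrams(token_tuples, sep="_"):
    # hypothetical helper: turns ('happy', 'dog') into 'happy_dog'
    # so that Dictionary sees one token instead of two
    return [sep.join(t) for t in token_tuples]
tokens = [('happy', 'dog'), ('restore',)]
test = gensim.corpora.Dictionary([join_ngrams(tokens)])
print([test[i] for i in test])  # e.g. ['happy_dog', 'restore']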
Posted on 2019-01-24 18:43:49
import gensim  # needed below for gensim.corpora.Dictionary
from gensim.models import Phrases
from gensim.models.phrases import Phraser
from gensim import models
docs = ['new york is is united states', 'new york is most populated city in the world', 'i love to stay in new york']
token_ = [doc.split(" ") for doc in docs]
# detect bigrams; delimiter=b' ' joins their parts with a space (gensim 3.x expects bytes here)
bigram = Phrases(token_, min_count=1, threshold=2, delimiter=b' ')
bigram_phraser = Phraser(bigram)
bigram_token = []
for sent in token_:
    bigram_token.append(bigram_phraser[sent])
The output will be:
[['new york', 'is', 'is', 'united', 'states'], ['new york', 'is', 'most', 'populated', 'city', 'in', 'the', 'world'], ['i', 'love', 'to', 'stay', 'in', 'new york']]
# now you can make a dictionary from the bigram tokens
dict_ = gensim.corpora.Dictionary(bigram_token)
print(dict_.token2id)
# convert each document into a bag-of-words vector; now you can use gensim's tfidf model
corpus = [dict_.doc2bow(text) for text in bigram_token]
tfidf_model = models.TfidfModel(corpus)
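To actually score documents with the fitted model, you can apply it to each bag-of-words vector; tfidf_model[bow] returns (token_id, weight) pairs. A short usage sketch reusing the names above (the dict_ lookup is only there for readable output):
for bow in corpus:
    print([(dict_[token_id], round(weight, 3)) for token_id, weight in tfidf_model[bow]])
Note that a token appearing in every document (like 'new york' here) gets an idf of zero and is dropped from the output.
Posted on 2019-01-28 13:39:37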
You have to "phrase" your corpus to detect bigrams before creating the dictionary.
I suggest you also stem or lemmatize it before feeding it into the dictionary. Below is an example using an nltk stemming function:
import re
from gensim.models.phrases import Phrases, Phraser
from gensim.corpora.dictionary import Dictionary
from gensim.models import TfidfModel
from nltk.stem.snowball import SnowballStemmer as Stemmer
stemmer = Stemmer("YOUR_LANG") # see nltk.stem.snowball doc
stopWords = {"YOUR_STOPWORDS_FOR_LANG"} # as a set
docs = ["LIST_OF_STR"]
def tokenize(text):
    """
    return list of str from a str
    """
    # keep lowercase alphanums and "-" but not "_"
    return [w for w in re.split(r"_+|[^\w-]+", text.lower()) if w not in stopWords]
docs = [tokenize(doc) for doc in docs]
phrases = Phrases(docs)
bigrams = Phraser(phrases)
corpus = [[stemmer.stem(w) for w in bigrams[doc]] for doc in docs]
dictionary = Dictionary(corpus)
# and here is your tfidf model:
tfidf = TfidfModel(dictionary=dictionary, normalize=True)
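Since this model is fitted from the dictionary's document frequencies rather than from a corpus, you apply it to bag-of-words vectors built with the same dictionary. A short usage sketch reusing the names above:
bows = [dictionary.doc2bow(doc) for doc in corpus]
for bow in bows:
    print([(dictionary[i], round(w, 3)) for i, w in tfidf[bow]])
https://stackoverflow.com/questions/51426107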