I'm trying to use TF-IDF to compute word frequencies for my message data. So far I have this:
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
new_group['tokenized_sents'] = new_group.apply(lambda row: nltk.word_tokenize(row['message']),axis=1).astype(str).lower()
vectoriser=TfidfVectorizer()
new_group['tokenized_vector'] = list(vectoriser.fit_transform(new_group['tokenized_sents']).toarray())

However, with the code above I get a bunch of zeros instead of word frequencies. How can I fix this so I get the correct counts for the messages? Here is my data:
user_id date message tokenized_sents tokenized_vector
X35WQ0U8S 2019-02-17 Need help ['need','help'] [0.0,0.0]
X36WDMT2J 2019-03-22 Thank you! ['thank','you','!'] [0.0,0.0,0.0]

Posted 2020-03-11 18:09:42
First, for counts you don't want to use TfidfVectorizer, because its output is normalized. You want CountVectorizer. Second, you don't need to tokenize the words yourself, since sklearn has a tokenizer built into both TfidfVectorizer and CountVectorizer.
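A quick illustration of the first point, on made-up two-message data: TfidfVectorizer L2-normalizes each row by default (`norm='l2'`), so the values are unit-length fractions rather than raw counts.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['need help', 'thank you']  # illustrative data

# default norm='l2' rescales every document vector to unit length
tfidf = TfidfVectorizer().fit_transform(docs).toarray()

print(np.linalg.norm(tfidf, axis=1))  # each row has L2 norm 1.0
```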
from sklearn.feature_extraction.text import CountVectorizer

# add whatever settings you want
countVec = CountVectorizer()
#fit transform
cv = countVec.fit_transform(df['message'].str.lower())
# feature names (in scikit-learn >= 1.2, get_feature_names() is removed; use get_feature_names_out())
cv_feature_names = countVec.get_feature_names()
#feature counts
feature_count = cv.toarray().sum(axis = 0)
#feature name to count
dict(zip(cv_feature_names, feature_count))

https://stackoverflow.com/questions/60641588