I have this code to compute text similarity with tf-idf.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [doc1, doc2]  # doc1 and doc2 are plain strings
tfidf = TfidfVectorizer().fit_transform(documents)
pairwise_similarity = tfidf * tfidf.T  # cosine similarities, since the rows are L2-normalized
print pairwise_similarity.A

The problem is that this code takes plain strings as input, and I want to prepare the documents by removing stopwords, stemming, and tokenizing them, so the input will be a list. If I call documents = [doc1, doc2] with the tokenized documents, the error is:
Traceback (most recent call last):
File "C:\Users\tasos\Desktop\my thesis\beta\similarity.py", line 18, in <module>
tfidf = TfidfVectorizer().fit_transform(documents)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 1219, in fit_transform
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 780, in fit_transform
vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 715, in _count_vocab
for feature in analyze(doc):
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 229, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 195, in <lambda>
return lambda x: strip_accents(x.lower())
AttributeError: 'list' object has no attribute 'lower'

Is there any way to change the code so that it accepts a list, or should I turn the tokenized documents back into strings?
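(For context, preprocessing of the kind the question describes might look like the following sketch. It uses NLTK's stopword list, word_tokenize, and the Porter stemmer, none of which appear in the original question, and it yields one token list per document.)

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

# Requires the NLTK 'punkt' and 'stopwords' data packages.
stop = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(text):
    # Tokenize, drop stopwords, and stem: the result is a list of
    # tokens rather than a string, which is what trips up the
    # vectorizer's default analyzer.
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop]

doc1 = preprocess(u"The quick brown fox jumps over the lazy dog.")
doc2 = preprocess(u"A quick brown dog outpaces a lazy fox.")
documents = [doc1, doc2]  # a list of token lists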
Posted on 2013-08-25 23:06:18
Try skipping the lowercasing step of the preprocessing and supplying your own "nop" tokenizer:
tfidf = TfidfVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(documents)

You should also check the other parameters, such as stop_words, to avoid duplicating your preprocessing.
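Putting the pieces together, a minimal end-to-end sketch (the token lists below are placeholder data) looks like this:

from sklearn.feature_extraction.text import TfidfVectorizer

# Pre-tokenized documents: lists of tokens, not strings.
documents = [[u'quick', u'brown', u'fox', u'jump', u'lazi', u'dog'],
             [u'quick', u'brown', u'dog', u'outpac', u'lazi', u'fox']]

# tokenizer=lambda doc: doc passes each token list through unchanged,
# and lowercase=False keeps the vectorizer from calling .lower() on a list.
tfidf = TfidfVectorizer(tokenizer=lambda doc: doc,
                        lowercase=False).fit_transform(documents)

pairwise_similarity = tfidf * tfidf.T  # cosine similarities, since rows are L2-normalized
print pairwise_similarity.A

Alternatively, if you would rather keep the default analyzer, you can join each token list back into a single string, e.g. documents = [u' '.join(doc) for doc in documents], and pass those strings to TfidfVectorizer as in the original code.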
https://stackoverflow.com/questions/18432289