Consider the following code:
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
import urllib.request # the lib that handles the url stuff
from bs4 import BeautifulSoup
import unicodedata
def remove_control_characters(s):
    base = ""
    for ch in s:
        # keep non-control characters (Unicode category not starting with "C"), lowercased
        if unicodedata.category(ch)[0] != "C":
            base = base + ch.lower()
        else:
            # replace control characters with a space
            base = base + " "
    return base
moby_dick_url='http://www.gutenberg.org/files/2701/2701-0.txt'
soul_of_japan = 'http://www.gutenberg.org/files/12096/12096-0.txt'
def extract_body(url):
    # download the document and take the string of the first element under <body>
    with urllib.request.urlopen(url) as s:
        data = BeautifulSoup(s).body()[0].string
        stripped = remove_control_characters(data)
    return stripped
moby = extract_body(moby_dick_url)
bushido = extract_body(soul_of_japan)
corpus = [moby,bushido]
vectorizer = TfidfVectorizer(use_idf=False, smooth_idf=True)
tf_idf = vectorizer.fit_transform(corpus)
df_tfidf = pd.DataFrame(tf_idf.toarray(), columns=vectorizer.get_feature_names(), index=["Moby", "Bushido"])
df_tfidf[["the", "whale"]]

I would expect "whale" to get a relatively high tf-idf in "Moby Dick" but a low score in "Bushido: The Soul of Japan". Instead, I get the opposite. The calculation comes out as:
| | the | whale |
|-------|-----------|----------|
|Moby | 0.707171 | 0.083146 |
|Bushido| 0.650069 | 0.000000 |

This makes no sense to me. Can someone point out my error in thinking, or in the code?
Posted on 2020-01-21 20:05:34
There are two reasons why you are observing this.

The first is the parameters you passed to the vectorizer. You should use TfidfVectorizer(use_idf=True, ...), because it is the inverse-document-frequency part that penalizes words appearing in all documents (remember that tf-idf is the product of term frequency and inverse document frequency). By setting TfidfVectorizer(use_idf=False, ...), you are only looking at the term-frequency part, which naturally gives stop words like "the" much larger scores.
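To make the effect concrete, here is a minimal, self-contained sketch. The two short strings are hypothetical stand-ins for the Gutenberg texts (so no download is needed), and the numbers therefore differ from the table above; the only thing that changes between the two runs is the use_idf switch:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy corpus standing in for Moby Dick and Bushido.
corpus = [
    "the whale the whale the sea",   # "whale" is distinctive here
    "the sword the honour the way",  # no "whale" at all
]

for use_idf in (False, True):
    vectorizer = TfidfVectorizer(use_idf=use_idf, smooth_idf=True)
    weights = vectorizer.fit_transform(corpus)
    # get_feature_names() matches the sklearn version used in the question;
    # newer releases call this get_feature_names_out()
    df = pd.DataFrame(weights.toarray(),
                      columns=vectorizer.get_feature_names(),
                      index=["Moby", "Bushido"])
    print("use_idf =", use_idf)
    print(df[["the", "whale"]], "\n")
```

With use_idf=False the weights are just L2-normalized term frequencies, so "the" dominates both rows (and smooth_idf has no effect at all). With use_idf=True and smooth_idf=True, a word occurring in both of the two documents gets idf = ln(3/3) + 1 = 1, while "whale", occurring in only one, gets idf = ln(3/2) + 1 ≈ 1.41, so the score of "the" shrinks relative to "whale".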
https://stackoverflow.com/questions/59845939