I have been trying to remove stop words (words not wanted, per the NLTK library's stop-word list) from a CSV file, but when I generate the new dataframe I still see some of those words, and I'm not sure how to remove them. I can't tell what's wrong with my code, but here it is:
```python
import re
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

stop_words = stopwords.words('english')
print(len(stop_words))
stop_words.extend(["consist", "feature", "site", "mound", "medium", "density", "enclosure"])
lemma = WordNetLemmatizer()
def clean_review(review_text):
    # review_text = re.sub(r'http\S+', '', review_text)
    review_text = re.sub('[^a-zA-Z]', ' ', str(review_text))
    review_text = review_text.lower()
    review_text = word_tokenize(review_text)
    review_text = [word for word in review_text if word not in stop_words]
    # review_text = [stemmer.stem(i) for i in review_text]
    review_text = [lemma.lemmatize(word=w, pos='v') for w in review_text]
    review_text = [i for i in review_text if len(i) > 2]
    review_text = ' '.join(review_text)
    return review_text
filename['New_Column'] = filename['Column'].apply(clean_review)
```

Posted on 2020-11-20 03:51:00
You are lemmatizing the text after removing stop words, which is fine in some cases.
However, some words may, after lemmatization, turn into entries that appear in your stop-word list.
See this example:
```python
>>> import nltk
>>> from nltk.stem import WordNetLemmatizer
>>> lemmatizer = WordNetLemmatizer()
>>> print(lemmatizer.lemmatize("sites"))
site
```

At first your script does not remove "sites" (it is not in the stop-word list), but after lemmatization it becomes "site", which should be removed.
https://stackoverflow.com/questions/64918506