
Counter() and most_common

Stack Overflow user
Asked on 2021-02-21 17:12:10
1 answer · 288 views · 0 following · Score 2

I am using a Counter() to count words in an Excel file. My goal is to get the most common words from the document. Counter() does not work correctly with my file. Here is the code:

#1. Building a Counter with bag-of-words

import pandas as pd
df = pd.read_excel('combined_file.xlsx', index_col=None)
import nltk

from nltk.tokenize import word_tokenize

# Tokenize the article: tokens
df['tokens'] = df['body'].apply(nltk.word_tokenize)

# Convert the tokens column into a list of lists
df_tokens_list = df.tokens.tolist()

# Convert the tokens into lowercase: lower_tokens
lower_tokens = [[string.lower() for string in sublist] for sublist in df_tokens_list]

# Import Counter

from collections import Counter

# Create a Counter with the lowercase tokens: bow_simple

bow_simple = Counter(x for xs in lower_tokens for x in set(xs))

# Print the 10 most common tokens
print(bow_simple.most_common(10))

#2. Text preprocessing practice

# Import WordNetLemmatizer

from nltk.stem import WordNetLemmatizer

# Retain alphabetic words: alpha_only
alpha_only = [t for t in bow_simple if t.isalpha()]

# Remove all stop words: no_stops 
from nltk.corpus import stopwords

no_stops = [t for t in alpha_only if t not in stopwords.words("english")]

# Instantiate the WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()

# Lemmatize all tokens into a new list: lemmatized
lemmatized = [wordnet_lemmatizer.lemmatize(t) for t in no_stops]

# Create the bag-of-words: bow
bow = Counter(lemmatized)
print(bow)
# Print the 10 most common tokens
print(bow.most_common(10))

The most common words after preprocessing are:

[('dry', 3), ('try', 3), ('clean', 3), ('love', 2), ('one', 2), ('serum', 2), ('eye', 2), ('boot', 2), ('woman', 2), ('cream', 2)]

This is not correct if we count these words by hand in Excel. Do you know what might be wrong with my code? Any help would be appreciated.

Here is a link to the file: https://www.dropbox.com/scl/fi/43nu0yf45obbyzprzc86n/combined_file.xlsx?dl=0&rlkey=7j959kz0urjxflf6r536brppt


1 Answer

Stack Overflow user

Answer accepted

Posted on 2021-02-21 18:10:52

The problem is that bow_simple, the value you process further, is already a Counter. Iterating over a Counter yields each distinct token only once, so your final counts no longer reflect how often each word occurs in the documents; they merely count how many variants of a word remain among the Counter's keys after lowercasing and NLTK processing. The solution is to build a flat list of tokens and feed that into alpha_only:
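To illustrate the point (with made-up tokens, not the asker's data): a Counter behaves like a dict, so iterating over it produces each distinct key exactly once and the frequency information is lost.

```python
from collections import Counter

# Build a Counter from a token list with repeats
c = Counter(['dry', 'dry', 'dry', 'clean'])

# Iterating the Counter yields each distinct key once - the counts are gone
print(list(c))            # ['dry', 'clean']

# The counts are only visible through the Counter's own API
print(c.most_common(1))   # [('dry', 3)]
```

This is why building a second Counter from the keys of the first one gives every word a count of 1 (or a small number reflecting case/lemma variants).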

# Create a Counter with the lowercase tokens: bow_simple
wordlist = [item for sublist in lower_tokens for item in sublist] #flatten list of lists
bow_simple = Counter(wordlist)

Then use wordlist in alpha_only:

alpha_only = [t for t in wordlist if t.isalpha()]
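As a side note, the flattening step can also be written with itertools.chain.from_iterable instead of a nested comprehension; a minimal sketch using invented tokens (not the asker's file):

```python
from itertools import chain
from collections import Counter

# Stand-in for the lowercased token lists from the DataFrame
lower_tokens = [['dry', 'skin'], ['dry', 'eye', 'cream']]

# Flatten the list of lists without a nested comprehension
wordlist = list(chain.from_iterable(lower_tokens))

# Count raw occurrences across all rows
bow_simple = Counter(wordlist)
print(bow_simple.most_common(2))  # [('dry', 2), ('skin', 1)]
```

Both forms produce the same flat list; chain.from_iterable is simply easier to read when the nesting gets deep.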

Output:

[('eye', 3617), ('product', 2567), ('cream', 2278), ('skin', 1791), ('good', 1081), ('use', 1006), ('really', 984), ('using', 928), ('feel', 798), ('work', 785)]
Score 2
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/66304912
