
How to simplify text comparison on a large dataset where the text has the same meaning but is not exact - text data deduplication

Stack Overflow user
Asked on 2020-09-04 12:50:09
1 answer · 66 views · 0 followers · 0 votes

I have a text dataset of about 1.8 million records (different menu items such as chocolate, cake, coke, etc.) belonging to 6 different categories (categories A, B, C, D, E, F). One of the categories has about 700,000 records. Most menu items are mixed into multiple categories they do not belong to, for example: cake belongs to category "A" but can also be found in categories "B" and "C".

I want to identify these misclassified items and report them to the staff, but the challenge is that the item names are not always spelled correctly, because the text is entirely typed by hand. For example, chocolate might be entered as hot chocolate, sweet chocolate, and so on. There can also be chocolate cake ;)

So, to handle this, I tried a simple approach: comparing the categories using cosine similarity to identify these anomalies. However, it takes a huge amount of time, because I am comparing each item against 1.8 million records (sample code below). Can anyone suggest a better way to approach this problem?

Code language: python
# Function
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

def cos_similarity(a, b):
    # tokenization
    X_list = word_tokenize(a)
    Y_list = word_tokenize(b)

    # sw contains the list of stopwords
    sw = stopwords.words('english')

    # remove stop words from each string
    X_set = {w for w in X_list if w not in sw}
    Y_set = {w for w in Y_list if w not in sw}

    # form a set containing the keywords of both strings,
    # then build a binary presence vector for each string over that set
    rvector = X_set.union(Y_set)
    l1 = [1 if w in X_set else 0 for w in rvector]
    l2 = [1 if w in Y_set else 0 for w in rvector]

    # cosine formula: dot product divided by the product of the norms
    # (for binary vectors, sum(l1) equals the squared norm of l1)
    c = sum(x * y for x, y in zip(l1, l2))
    norm = float((sum(l1) * sum(l2)) ** 0.5)
    return c / norm if norm > 0 else 0

# Base code: brute-force comparison of every category-B item
# against every category-A item (len(B) * len(A) similarity calls)
cos_sim_list = []
for i in category_B.index:
    ln_i = str(category_B['item_name'][i])
    for j in category_A.index:
        ln_j = str(category_A['item_name'][j])
        degreeOfSimilarity = cos_similarity(ln_j, ln_i)
        if degreeOfSimilarity > 0.5:
            cos_sim_list.append([ln_j, ln_i, degreeOfSimilarity])

Assume the text has already been cleaned.
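One way to speed this up without changing the basic idea is to vectorize both categories once with TF-IDF and compute all pairwise cosine similarities as a single sparse matrix product, instead of calling cos_similarity once per pair. The sketch below is not from the question; it assumes category_A and category_B are pandas DataFrames with an item_name column, as in the code above, and that category B is small enough to compare in one shot (for the full 1.8M records you would process B in chunks).

Code language: python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# fit one shared vocabulary so vectors from both categories are comparable
vectorizer = TfidfVectorizer(stop_words='english')
all_names = pd.concat([category_A['item_name'], category_B['item_name']]).astype(str)
vectorizer.fit(all_names)

A = vectorizer.transform(category_A['item_name'].astype(str))
B = vectorizer.transform(category_B['item_name'].astype(str))

# all pairwise similarities in one sparse matrix product (no Python-level loop)
sim = cosine_similarity(B, A, dense_output=False)

# keep only the pairs above the 0.5 threshold used in the loop above
rows, cols = (sim > 0.5).nonzero()
cos_sim_list = [[category_A['item_name'].iloc[c],
                 category_B['item_name'].iloc[r],
                 sim[r, c]]
                for r, c in zip(rows, cols)]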


1 Answer

Stack Overflow user

Answered on 2020-09-07 03:13:46

I used NearestNeighbors together with cosine similarity to solve this problem. Although I had to run the code several times to compare one pair of categories at a time, it still works well because the number of categories is small. If there is a better solution, please suggest it.
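Note that the snippet below passes an ngrams function to TfidfVectorizer as its analyzer but never defines it. A character-trigram analyzer is the usual choice for this fuzzy-matching pattern, since near-misspellings like "choclate" and "chocolate" still share most of their trigrams; the definition below is an assumed reconstruction, not part of the original answer.

Code language: python
import re

def ngrams(string, n=3):
    # assumed helper: referenced by the answer's code but not defined there;
    # strips basic punctuation, lowercases, and emits overlapping character n-grams
    string = re.sub(r'[,-./]', '', str(string).lower())
    return [string[i:i + n] for i in range(len(string) - n + 1)]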

Code language: python
import time

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neighbors import NearestNeighbors

cat_A_clean = category_A['item_name'].unique()

print('Vectorizing the data - this could take a few minutes for large datasets...')
vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams, lowercase=False)
tfidf = vectorizer.fit_transform(cat_A_clean)
print('Vectorizing completed...')

# index category A once; each category-B item then needs a single
# nearest-neighbour lookup instead of a scan over all of category A
nbrs = NearestNeighbors(n_neighbors=1, n_jobs=-1).fit(tfidf)

# use a list (not a set) so the ordering is stable when building the matches
unique_B = list(set(category_B['item_name'].values))

def getNearestN(query):
    queryTFIDF_ = vectorizer.transform(query)
    distances, indices = nbrs.kneighbors(queryTFIDF_)
    return distances, indices

t1 = time.time()
print('getting nearest n...')
distances, indices = getNearestN(unique_B)
print("COMPLETED IN:", time.time() - t1)

print('finding matches...')
matches = []
for i, j in enumerate(indices):
    # j is a length-1 array of row indices into cat_A_clean
    temp = [round(distances[i][0], 2), cat_A_clean[j[0]], unique_B[i]]
    matches.append(temp)

print('Building data frame...')
matches = pd.DataFrame(matches, columns=['Match confidence (lower is better)', 'ITEM_A', 'ITEM_B'])
print('Done')

def clean_string(text):
    return str(text).lower()

def cosine_sim_vectors(vec1, vec2):
    vec1 = vec1.reshape(1, -1)
    vec2 = vec2.reshape(1, -1)
    return cosine_similarity(vec1, vec2)[0][0]

def cos_similarity(sentences):
    # re-vectorize just the candidate pair with raw term counts and compare
    cleaned = list(map(clean_string, sentences))
    vectors = CountVectorizer().fit_transform(cleaned).toarray()
    return cosine_sim_vectors(vectors[0], vectors[1])

cos_sim_list = []
for ind in matches.index:
    a = matches['Match confidence (lower is better)'][ind]
    b = matches['ITEM_A'][ind]
    c = matches['ITEM_B'][ind]
    degreeOfSimilarity = cos_similarity([b, c])
    cos_sim_list.append([a, b, c, degreeOfSimilarity])
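The answer stops at cos_sim_list and never shows the final step the question asks for: flagging the suspected misclassifications for review. A minimal follow-up sketch, reusing the 0.5 similarity threshold from the question (the cutoff and the cosine_similarity column name here are illustrative, not from the original answer):

Code language: python
# category-B items whose best category-A match is highly similar are
# candidates for misclassification; the 0.5 cutoff mirrors the question
report = pd.DataFrame(cos_sim_list,
                      columns=['Match confidence (lower is better)',
                               'ITEM_A', 'ITEM_B', 'cosine_similarity'])
suspects = report[report['cosine_similarity'] > 0.5]
suspects = suspects.sort_values('cosine_similarity', ascending=False)
print(suspects.head(20))  # top candidates to report for manual review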
Votes: 0
Original content from Stack Overflow.
Source: https://stackoverflow.com/questions/63735023
