Removing near-duplicates from data, based on measurement imprecision
I am struggling with a problem in Python for filtering near-duplicates out of data. In particular, I am looking for an approach that scales to large data with more than 100 rows and more than 25 columns.
Here is a simplified example using the DataFrame below:
>>> df
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
1  1.764052  0.400157  0.978738  2.240893
2 -0.103219  0.410599  0.144044  1.454274
3  0.761038  0.121675  0.443863  0.333674
4 -0.103219  0.410599  0.144044  1.454274
5  1.230291  1.202380 -0.387327 -0.302303
6  1.230291  1.202380 -0.387327 -0.302303
7  1.532779  1.469359  0.154947  0.378163
8  1.230291  1.202380 -0.387327 -0.302303
9  1.230291  1.202380 -0.387327 -0.302303
>>> df1 = df.drop_duplicates()
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
2 -0.103219  0.410599  0.144044  1.454274
3  0.761038  0.121675  0.443863  0.333674
4 -0.103219  0.410600  0.144044  1.454274
5  1.240291  1.202380 -0.387327 -0.302303
7  1.532779  1.469359  0.154947  0.378163
8  1.230291  1.202380 -0.387327 -0.302303
>>> df2 = df. special code ?
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
2 -0.103219  0.410599  0.144044  1.454274
3  0.761038  0.121675  0.443863  0.333674
5  1.240291  1.202380 -0.387327 -0.302303
7  1.532779  1.469359  0.154947  0.378163
8  1.230291  1.202380 -0.387327 -0.302303

So drop_duplicates() in pandas is very efficient and super fast, and it works very well. But it only filters exact copies. To minimize the data in view of the measurement imprecision, I would also like to drop rows that are merely similar, i.e. identical within a defined measurement inaccuracy.
So row 4 should also be dropped: it is "almost identical" to row 2 (the values differ only marginally, in column b).
On the other hand, row 8 should be kept: it is similar to row 5 (in column a), but the difference exceeds the measurement inaccuracy.
The following approach solves the problem for small data, but unfortunately it is far too slow on large data:
tolerances = {'a': 0.001,
              'b': 0.5,
              'c': 0.5,
              'd': 0.05}

df_clean = pd.DataFrame(columns=df.columns.to_list())
# note: DataFrame.append was removed in pandas 2.0; pd.concat is the modern replacement
df_clean = df_clean.append(df.iloc[1])
for i in range(df.shape[0]):
    for j in range(df_clean.shape[0]):
        m = 0
        for key in tolerances:
            if ((df.iloc[i].loc[key] <= df_clean.iloc[j].loc[key] + tolerances[key]) and
                    (df.iloc[i].loc[key] >= df_clean.iloc[j].loc[key] - tolerances[key])):
                m = m + 1
            else:
                break
        if m == len(tolerances):
            break
        if j == (df_clean.shape[0] - 1):
            df_clean = df_clean.append(df.iloc[i])
df_clean.sort_index(inplace=True)
>>> print(df_clean)
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
1 -0.103219  0.410599  0.144044  1.454274
2  0.761038  0.121675  0.443863  0.333674
4  1.240291  1.202380 -0.387327 -0.302303
5  1.532779  1.469359  0.154947  0.378163
6  1.230291  1.202380 -0.387327 -0.302303

Posted on 2020-04-08 15:42:49
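(A common shortcut for this kind of tolerance-based deduplication, sketched here under the question's tolerances dict with a hypothetical helper name, is to quantize each column by its tolerance and then reuse the fast drop_duplicates() on the binned values. This is only an approximation: two values within tolerance of each other can still fall into adjacent bins.)

```python
import pandas as pd

tolerances = {'a': 0.001, 'b': 0.5, 'c': 0.5, 'd': 0.05}

def drop_near_duplicates_binned(df, tolerances):
    # Quantize every column by its tolerance; rows whose binned values
    # coincide are treated as near-duplicates, and only the first is kept.
    tol = pd.Series(tolerances)
    binned = (df[tol.index] / tol).round()
    return df.loc[binned.drop_duplicates().index]

# Tiny illustration: rows 0 and 1 differ in 'a' by only 0.0004 < 0.001
df = pd.DataFrame({'a': [1.000000, 1.000400, 2.000000],
                   'b': [0.1, 0.1, 0.1],
                   'c': [0.2, 0.2, 0.2],
                   'd': [0.3, 0.3, 0.3]})
print(drop_near_duplicates_binned(df, tolerances))  # rows 0 and 2 survive
```

Because it is a single vectorized pass plus a hash-based drop_duplicates(), this scales much better than the nested loop above, at the price of the bin-boundary caveat.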
Here is your input data:
from scipy.spatial.distance import pdist, squareform
import numpy as np
import pandas as pd
data = {'a': {0: '1.764052', 1: '-0.103219', 2: '0.761038', 3: '-0.103219', 4: '1.240291', 5: '1.532779', 6: '1.230291'}, 'b': {0: '0.400157', 1: '0.410599', 2: '0.121675', 3: '0.410600', 4: '1.202380', 5: '1.469359', 6: '1.202380'}, 'c': {0: '0.978738', 1: '0.144044', 2: '0.443863', 3: '0.144044', 4: '-0.387327', 5: '0.154947', 6: '-0.387327'}, 'd': {0: '2.240893', 1: '1.454274', 2: '0.333674', 3: '1.454274', 4: '-0.302303', 5: '0.378163', 6: '-0.302303'}}
df = pd.DataFrame(data, columns=["a", "b", "c", "d"]).astype(float)  # the values in `data` are strings, so convert to float
tolerances = {'a': 0.001, 'b': 0.5, 'c': 0.5, 'd': 0.05}
tolerances_values = np.fromiter(tolerances.values(), dtype=float)
>>> print(df)
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
1 -0.103219  0.410599  0.144044  1.454274
2  0.761038  0.121675  0.443863  0.333674
3 -0.103219  0.410600  0.144044  1.454274
4  1.240291  1.202380 -0.387327 -0.302303
5  1.532779  1.469359  0.154947  0.378163
6  1.230291  1.202380 -0.387327 -0.302303

You want to remove rows that are similar enough, given the provided distance: the difference between two rows must not be greater than the values defined in tolerances.
from scipy.spatial.distance import pdist, squareform
# Define your similarity function between rows.
def is_similar(x, y):
    """
    Returns True if x is similar to y, False otherwise
    """
    diffs = np.abs(y - x)                        # look at absolute differences
    similar = all(diffs <= tolerances_values)    # True if all column diffs are within tolerances
    return bool(similar)
# Compute similarities on all your dataframe
similarity_values = pdist(df.to_numpy(), is_similar)
# Convert np.array() into a pd.DataFrame()
similarity_df = pd.DataFrame(squareform(similarity_values), index=df.index, columns=df.index)
# Get indices of similar rows
similar_indices = similarity_df[similarity_df == True].stack().index.tolist()
# Remove symmetric duplicates (of (i,j) and (j,i), keep only the pair with i < j)
similar_indices = [sorted(tpl) for tpl in similar_indices if tpl[0] < tpl[1]]
# Flatten
similar_indices = list(set([item for tpl in similar_indices for item in tpl]))

Now you can filter out the similar rows:
>>> df[~df.index.isin(similar_indices)]
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
2  0.761038  0.121675  0.443863  0.333674
4  1.240291  1.202380 -0.387327 -0.302303
5  1.532779  1.469359  0.154947  0.378163
6  1.230291  1.202380 -0.387327 -0.302303

Another example, using cosine_similarity
Define a function that computes the similarities and removes the indices whose similarity is above a threshold:
from sklearn.metrics.pairwise import cosine_similarity  # any other metric can be used

def remove_similar(df, distance, threshold):
    distance_df = distance(df)  # e.g. cosine_similarity, passed in as `distance`
    similar_indices = [(x, y) for (x, y) in np.argwhere(distance_df > threshold) if x != y]
    similar_indices = list(set([item for tpl in similar_indices for item in tpl]))
    return df[~df.index.isin(similar_indices)]

Now you can try it with distance=cosine_similarity and different thresholds:
>>> remove_similar(df, cosine_similarity, 0.9)
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
2  0.761038  0.121675  0.443863  0.333674
5  1.532779  1.469359  0.154947  0.378163
>>> remove_similar(df, cosine_similarity, 0.9999999)
          a         b         c         d
0  1.764052  0.400157  0.978738  2.240893
2  0.761038  0.121675  0.443863  0.333674
4  1.240291  1.202380 -0.387327 -0.302303
5  1.532779  1.469359  0.154947  0.378163
6  1.230291  1.202380 -0.387327 -0.302303

Source: https://stackoverflow.com/questions/61103481
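Note that both answer snippets drop every row involved in a similar pair (rows 1 and 3 are both removed above), whereas the question's loop keeps the first occurrence of each group. A greedy variant that keeps one representative per group, sketched here with a hypothetical drop_within_tolerance helper under the same tolerances, could look like:

```python
import numpy as np
import pandas as pd

def drop_within_tolerance(df, tolerances):
    """Keep a row only if no previously kept row matches it within the
    per-column tolerances (greedy, first occurrence wins)."""
    tol = np.fromiter(tolerances.values(), dtype=float)
    values = df[list(tolerances)].to_numpy(dtype=float)
    kept = []
    for i, row in enumerate(values):
        diffs = np.abs(values[kept] - row)            # compare against kept rows only
        if not np.any(np.all(diffs <= tol, axis=1)):  # no kept row is within tolerance
            kept.append(i)
    return df.iloc[kept]

tolerances = {'a': 0.001, 'b': 0.5, 'c': 0.5, 'd': 0.05}
df = pd.DataFrame({
    'a': [1.764052, -0.103219, 0.761038, -0.103219, 1.240291, 1.532779, 1.230291],
    'b': [0.400157, 0.410599, 0.121675, 0.410600, 1.202380, 1.469359, 1.202380],
    'c': [0.978738, 0.144044, 0.443863, 0.144044, -0.387327, 0.154947, -0.387327],
    'd': [2.240893, 1.454274, 0.333674, 1.454274, -0.302303, 0.378163, -0.302303]})
print(drop_within_tolerance(df, tolerances))  # drops only row 3
```

On this data it keeps rows 0, 1, 2, 4, 5 and 6, matching the df_clean output the question asks for. It still performs O(n·k) comparisons, but each one is vectorized with NumPy, avoiding the slow row-by-row .iloc access of the original loop.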