I have a bunch of CSVs in a folder, formatted like this:
chunk timecodes chunk_completed chunk_id diffs_avg sd
0 [53] [[45930]] [45930] 53
1 [53, 50] [[45930], [46480]] [46480] 53-50 550.0
2 [53, 50, 63] [[45930], [46480], [47980]] [47980] 53-50-63 1025.0 671.7514421272201
3 [53, 50, 63, 60] [[45930], [46480], [47980], [49360]] [49360] 53-50-63-60 1143.3333333333333 517.3329037798903
4 [53, 50, 63, 60, 73] [[45930], [46480], [47980], [49360], [50040]] [50040] 53-50-63-60-73 1027.5 481.75893003313035
5 [53, 50, 63, 60, 73, 70] [[45930], [46480], [47980], [49360], [50040], [50310]] [50310] 53-50-63-60-73-70 876.0 537.4290650867331
6 [50] [[46480]] [46480] 50
7 [50, 63] [[46480], [47980]] [47980] 50-63 1500.0
8 [50, 63, 60] [[46480], [47980], [49360]] [49360] 50-63-60 1440.0 84.8528137423857
9 [50, 63, 60, 73] [[46480], [47980], [49360], [50040]] [50040] 50-63-60-73 1186.6666666666667 442.86943147313
I read them in as DataFrames and collect them in a list:
import glob
import pandas as pd

csvs = []
list_of_files = glob.glob('*.csv')
for file in list_of_files:
    f = pd.read_csv(file)
    csvs.append(f)
What I want to do is reduce these to a single dataframe with no duplicate "chunk_id"s; instead, I want to merge on that ID.
I tried this:
from functools import reduce

red = reduce(pd.merge, csvs)
This gives me a very wide dataframe with no entries.
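The empty result is expected: when `pd.merge` is called with no `on=`, it inner-joins on *every* shared column, so a row survives only if it is identical in each frame. A minimal sketch with two hypothetical frames (toy data, not from the CSVs above) showing the failure and the effect of restricting the join key:

```python
import pandas as pd
from functools import reduce

# Two toy frames standing in for the CSVs (hypothetical data).
df1 = pd.DataFrame({'chunk_id': ['60-62', '58-53'], 'diffs_avg': [2.0, 4.0]})
df2 = pd.DataFrame({'chunk_id': ['60-62', '30-33'], 'diffs_avg': [4.0, 6.0]})

# With no `on=`, pd.merge inner-joins on all shared columns, so a row
# must match in *every* column of both frames - here nothing does.
red = reduce(pd.merge, [df1, df2])
print(len(red))  # 0

# Restricting the join key to chunk_id keeps the matching rows, but the
# other columns come back suffixed (_x, _y) - hence the "very wide" frame.
merged = reduce(lambda a, b: pd.merge(a, b, on='chunk_id'), [df1, df2])
print(merged.columns.tolist())
```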
I haven't tried the averaging yet, but I want to end up with a dataframe whose columns are exactly the same as the example above, where every row across the dataframes that shares the same "chunk_id" is merged, with the "diffs_avg", "timecodes", "chunk_completed", and "sd" columns averaged.
So, if I had read in the following dfs:
DF1
chunk timecodes chunk_completed chunk_id diffs_avg sd
[60 62] [100, 200] 500 60-62 2 1
[58 53] [800, 900] 1000 58-53 4 6
DF2
chunk timecodes chunk_completed chunk_id diffs_avg sd
[60 62] [200, 400] 1000 60-62 4 2
[30 33] [200, 700] 800 30-33 6 7
Result:
*[60 62] [150, 300] 750 60-62 3 1.5*
[58 53] [800, 900] 1000 58-53 4 6
[30 33] [200, 700] 800 30-33 6 7
Reproducible DF:
{'chunk': {0: '[53]',
1: '[53, 50]',
2: '[53, 50, 63]',
3: '[53, 50, 63, 60]',
4: '[53, 50, 63, 60, 73]',
5: '[53, 50, 63, 60, 73, 70]',
6: '[50]',
7: '[50, 63]',
8: '[50, 63, 60]',
9: '[50, 63, 60, 73]'},
'chunk_completed': {0: '[45930]',
1: '[46480]',
2: '[47980]',
3: '[49360]',
4: '[50040]',
5: '[50310]',
6: '[46480]',
7: '[47980]',
8: '[49360]',
9: '[50040]'},
'chunk_id': {0: '53',
1: '53-50',
2: '53-50-63',
3: '53-50-63-60',
4: '53-50-63-60-73',
5: '53-50-63-60-73-70',
6: '50',
7: '50-63',
8: '50-63-60',
9: '50-63-60-73'},
'diffs_avg': {0: np.nan,
1: 550.0,
2: 1025.0,
3: 1143.3333333333333,
4: 1027.5,
5: 876.0,
6: np.nan,
7: 1500.0,
8: 1440.0,
9: 1186.6666666666667},
'sd': {0: np.nan,
1: np.nan,
2: 671.7514421272201,
3: 517.3329037798903,
4: 481.75893003313035,
5: 537.4290650867331,
6: np.nan,
7: np.nan,
8: 84.8528137423857,
9: 442.86943147313},
'timecodes': {0: '[[45930]]',
1: '[[45930], [46480]]',
2: '[[45930], [46480], [47980]]',
3: '[[45930], [46480], [47980], [49360]]',
4: '[[45930], [46480], [47980], [49360], [50040]]',
5: '[[45930], [46480], [47980], [49360], [50040], [50310]]',
6: '[[46480]]',
7: '[[46480], [47980]]',
8: '[[46480], [47980], [49360]]',
9: '[[46480], [47980], [49360], [50040]]'}}
Posted on 2019-03-26 10:43:43
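The dict above can be rebuilt into a frame with `pd.DataFrame`. A minimal sketch (abbreviated to two rows here) — note that the bracketed columns come back as plain strings, which matters for any later averaging:

```python
import numpy as np
import pandas as pd

# Abbreviated version of the reproducible dict above.
data = {'chunk': {0: '[53]', 1: '[53, 50]'},
        'chunk_completed': {0: '[45930]', 1: '[46480]'},
        'chunk_id': {0: '53', 1: '53-50'},
        'diffs_avg': {0: np.nan, 1: 550.0},
        'sd': {0: np.nan, 1: np.nan},
        'timecodes': {0: '[[45930]]', 1: '[[45930], [46480]]'}}

df = pd.DataFrame(data)
print(df.dtypes)  # the list-like columns arrive as strings (object dtype)
```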
Without knowing more about the timecodes column and its type, you can use pandas.DataFrame.groupby with .agg to take the mean of the remaining columns over chunk_id.
# First of all you should concat your csvs into one big dataframe:
df3 = pd.concat(csvs, axis=0, ignore_index=True)

# Here we concat df1 & df2 to simulate the appending of your csvs:
df3 = pd.concat([df1, df2], ignore_index=True)
print(df3)
chunk timecodes chunk_completed chunk_id diffs_avg sd
0 [60 62] [100, 200] 500 60-62 2 1
1 [58 53] [800, 900] 1000 58-53 4 6
2 [60 62] [200, 400] 1000 60-62 4 2
3 [30 33] [200, 700] 800 30-33 6 7
Now we can aggregate with groupby:
df_grouped = df3.groupby('chunk_id').agg({'chunk_completed':'mean',
'diffs_avg':'mean',
'sd':'mean'}).reset_index()
print(df_grouped)
chunk_id chunk_completed diffs_avg sd
0 30-33 800 6 7.0
1 58-53 1000 4 6.0
2 60-62 750 3 1.5
https://stackoverflow.com/questions/55354742
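If the bracketed columns are stored as strings (as in the reproducible dict), one possible extension — an assumption, not part of the accepted answer — is to parse them with `ast.literal_eval` and average the timecodes element-wise, which works as long as all rows in a chunk_id group hold equal-length lists:

```python
import ast
import pandas as pd

# Hypothetical frames matching DF1/DF2 above, list columns stored as strings.
df1 = pd.DataFrame({'chunk_id': ['60-62', '58-53'],
                    'timecodes': ['[100, 200]', '[800, 900]'],
                    'chunk_completed': [500, 1000],
                    'diffs_avg': [2.0, 4.0], 'sd': [1.0, 6.0]})
df2 = pd.DataFrame({'chunk_id': ['60-62', '30-33'],
                    'timecodes': ['[200, 400]', '[200, 700]'],
                    'chunk_completed': [1000, 800],
                    'diffs_avg': [4.0, 6.0], 'sd': [2.0, 7.0]})

df3 = pd.concat([df1, df2], ignore_index=True)
df3['timecodes'] = df3['timecodes'].apply(ast.literal_eval)

def agg_group(g):
    # Element-wise mean over equal-length lists, plus plain means elsewhere,
    # e.g. [100, 200] & [200, 400] -> [150.0, 300.0]
    return pd.Series({
        'timecodes': [sum(v) / len(v) for v in zip(*g['timecodes'])],
        'chunk_completed': g['chunk_completed'].mean(),
        'diffs_avg': g['diffs_avg'].mean(),
        'sd': g['sd'].mean()})

out = df3.groupby('chunk_id').apply(agg_group).reset_index()
print(out)
```

For the 60-62 group this reproduces the asked-for row: timecodes [150.0, 300.0], chunk_completed 750, diffs_avg 3, sd 1.5.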