I am trying to do a nested groupby as follows:
>>> df1 = pd.DataFrame({'Date': {0: '2016-10-11', 1: '2016-10-11', 2: '2016-10-11', 3: '2016-10-11', 4: '2016-10-11',5: '2016-10-12'}, 'Stock': {0: 'ABC', 1: 'ABC', 2: 'ABC', 3: 'ABC', 4: 'ABC', 5: 'XYZ'}, 'Quantity': {0: 60,1: 50, 2: 40, 3: 30, 4: 20, 5: 10}, 'UiD':{0:1,1:1,2:1,3:2,4:2,5:3}, 'StartTime': {0: '08:00:00.241', 1: '08:00:00.243', 2: '12:34:23.563', 3: '08:14.05.908', 4: '18:54:50.100', 5: '10:08:36.657'}, 'Sign':{0:1,1:1,2:0,3:-1,4:0,5:-1}, 'leg1':{0:2,1:2,2:4,3:5,4:7,5:8}})
>>> df1
Date Quantity Sign StartTime Stock UiD leg1
0 2016-10-11 60 1 08:00:00.241 ABC 1 2
1 2016-10-11 50 1 08:00:00.243 ABC 1 2
2 2016-10-11 40 0 12:34:23.563 ABC 1 4
3 2016-10-11 30 -1 08:14.05.908 ABC 2 5
4 2016-10-11 20 0 18:54:50.100 ABC 2 7
5 2016-10-12 10 -1 10:08:36.657 XYZ 3 8
>>> dfg1=df1.groupby(['Date','Stock'])
>>> dfg1.apply(lambda x:x.groupby('UiD').first()).groupby(['Date','Stock']).apply(lambda x:np.sum(x['Quantity']))
Date Stock
2016-10-11 ABC 90
2016-10-12 XYZ 10
dtype: int64
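For reference, a minimal sketch (sample data reconstructed from the question above, with the unused columns dropped): the nested apply can also be written as one groupby over all three keys, followed by a sum over the remaining index levels.

```python
import pandas as pd

# Sample data reconstructed from the question (StartTime/Sign/leg1 omitted).
df1 = pd.DataFrame({
    'Date': ['2016-10-11'] * 5 + ['2016-10-12'],
    'Stock': ['ABC'] * 5 + ['XYZ'],
    'Quantity': [60, 50, 40, 30, 20, 10],
    'UiD': [1, 1, 1, 2, 2, 3],
})

qty = (df1.groupby(['Date', 'Stock', 'UiD'])['Quantity']
          .first()                           # first row per (Date, Stock, UiD)
          .groupby(level=['Date', 'Stock'])
          .sum())                            # then sum within (Date, Stock)
print(qty)
```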
>>>
>>> dfg1['leg1'].sum()
Date Stock
2016-10-11 ABC 20
2016-10-12 XYZ 8
Name: leg1, dtype: int64
So far so good. Now I am trying to concat these two results into a new DataFrame, df2, as follows:
>>> df2 = pd.concat([dfg1['leg1'].sum(), dfg1.apply(lambda x:x.groupby('UiD').first()).groupby(['Date','Stock']).apply(lambda x:np.sum(x['Quantity']))],axis=1)
>>> df2
                   0   1
Date Stock
2016-10-11 ABC 20 90
2016-10-12 XYZ 8 10
I was wondering if there is a better way to rewrite the following line, so as to avoid repeating groupby(['Date','Stock']).
dfg1.apply(lambda x:x.groupby('UiD').first()).groupby(['Date','Stock']).apply(lambda x:np.sum(x['Quantity']))
This also fails if ['Date','Stock'] contains 'UiD' as one of the keys, or if ['Date','Stock'] is replaced by just ['UiD'].
Posted on 2018-02-16 07:26:31
Please restate your question to make it clearer. You want groupby(['Date','Stock']); then:
In any case, you want to perform an aggregation (sum) over multiple columns, and the way to avoid repeating groupby(['Date','Stock']) is to keep a single dataframe, rather than trying to stitch two dataframes together from two separate aggregation operations. Something like the following (once you confirm this is what you want, I will fix it up):
def filter_first_UiD(g):
    #return g.groupby('UiD').first().agg(np.sum)
    return g.groupby('UiD').first().agg({'Quantity':'sum', 'leg1':'sum'})

df1.groupby(['Date','Stock']).apply(filter_first_UiD)
Posted on 2016-11-09 09:47:09
The way I handled the last scenario, avoiding the groupby failing when ['Date','Stock'] contains 'UiD' as one of the keys, or when ['Date','Stock'] is replaced by ['UiD'], is as follows:
>>> df2 = pd.concat([dfg1['leg1'].sum(),
...                  dfg1['Quantity'].first() if 'UiD' in ['Date','Stock']
...                  else dfg1.apply(lambda x:x.groupby('UiD').first()).groupby(['Date','Stock']).apply(lambda x:np.sum(x['Quantity']))],
...                  axis=1)
But a more elegant solution is still an open question.
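One possible sketch of a more uniform approach (the helper variable names here are illustrative, not from the original post): de-duplicate on the grouping keys plus 'UiD' up front, so a single pass works whether or not 'UiD' is already among the keys.

```python
import pandas as pd

# Sample data reconstructed from the question (StartTime/Sign omitted).
df1 = pd.DataFrame({
    'Date': ['2016-10-11'] * 5 + ['2016-10-12'],
    'Stock': ['ABC'] * 5 + ['XYZ'],
    'Quantity': [60, 50, 40, 30, 20, 10],
    'UiD': [1, 1, 1, 2, 2, 3],
    'leg1': [2, 2, 4, 5, 7, 8],
})

keys = ['Date', 'Stock']
# Keep the first row per (keys + 'UiD'); dict.fromkeys drops a duplicate
# 'UiD' in case it is already one of the keys.
dedup = df1.drop_duplicates(subset=list(dict.fromkeys(keys + ['UiD'])))
qty = dedup.groupby(keys)['Quantity'].sum()  # first-per-UiD quantity sums
leg = df1.groupby(keys)['leg1'].sum()        # plain per-group leg1 sums
df2 = pd.concat([leg, qty], axis=1)
print(df2)
```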
https://stackoverflow.com/questions/40493024