Table1 and Table2 are related: for each px combination in Table1, the time entries are recorded in Table2. I need the last entry for each px combination. How can I do this in Hive? The expected output is shown below for reference.

Table1
px1 coo1
px1 coo2
px1 coo3
px2 coo2
px2 coo4
px3 coo3
px4 coo4

Table2
id1 2014-01-01 21:23:23,273 px1 coo1
id2 2014-01-01 22:01:22,377 px1 coo1
id3 2014-01-01 22:25:06,196 px1 coo1
id4 2014-01-01 22:51:39,487 px1 coo1
id5 2014-01-01 02:05:57,875 px1 coo2
id6 2014-01-01 02:09:42,675 px1 coo2
id7 2014-01-01 23:19:42,059 px1 coo3
id8 2014-01-01 23:34:51,782 px1 coo3
id9 2014-01-01 06:13:05,531 px2 coo2
id10 2014-01-01 06:27:36,676 px2 coo2
id11 2014-01-01 06:59:43,999 px2 coo2
id12 2014-01-01 09:21:57,325 px3 coo3
id13 2014-01-01 17:19:06,956 px4 coo4
id14 2014-01-01 17:27:05,128 px4 coo4

The expected output should be:
id4 2014-01-01 22:51:39,487 px1 coo1
id6 2014-01-01 02:09:42,675 px1 coo2
id8 2014-01-01 23:34:51,782 px1 coo3
id11 2014-01-01 06:59:43,999 px2 coo2
id12 2014-01-01 09:21:57,325 px3 coo3
id14 2014-01-01 17:27:05,128 px4 coo4

Posted on 2014-01-29 17:18:49
I am assuming the result can be derived from table2 alone, i.e. the (pix_id, coo_id) pairs present in table2 are the ones you care about. Apologies if that assumption is wrong.
hive (sflow)> desc table2;
OK
col_name data_type comment
id string from deserializer
time_stamp string from deserializer
pix_id string from deserializer
coo_id string from deserializer
Time taken: 0.277 seconds
hive (sflow)>
SELECT t2.id, t2.time_stamp, t2.pix_id, t2.coo_id
FROM table2 t2 JOIN
   ( SELECT pix_id, coo_id, MAX(UNIX_TIMESTAMP(time_stamp)) AS max_epoch
     FROM table2
     GROUP BY pix_id, coo_id ) temp
WHERE t2.pix_id = temp.pix_id AND t2.coo_id = temp.coo_id AND UNIX_TIMESTAMP(t2.time_stamp) = temp.max_epoch;

PS: here is the full log from the run (note that I am running Hadoop in pseudo-distributed mode, Hive 0.9, 2 GB RAM):
hive (sflow)> from table2 t2 join (select pix_id,coo_id, Max(UNIX_TIMESTAMP(time_stamp)) as max_epoch from table2 group by pix_id,coo_id) temp
> select t2.id,t2.time_stamp,t2.pix_id,t2.coo_id where t2.pix_id=temp.pix_id and t2.coo_id=temp.coo_id and UNIX_TIMESTAMP(t2.time_stamp) = max_epoch ;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
Total MapReduce CPU Time Spent: 24 seconds 0 msec
OK
id time_stamp pix_id coo_id
id4 2014-01-01 22:51:39,487 px1 coo1
id6 2014-01-01 02:09:42,675 px1 coo2
id8 2014-01-01 23:34:51,782 px1 coo3
id11 2014-01-01 06:59:43,999 px2 coo2
id12 2014-01-01 09:21:57,325 px3 coo3
id14 2014-01-01 17:27:05,128 px4 coo4
Time taken: 145.17 seconds
hive (sflow)>
hive (sflow)> desc table2;
OK
col_name data_type comment
id string from deserializer
time_stamp string from deserializer
pix_id string from deserializer
coo_id string from deserializer
Time taken: 0.277 seconds
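The logic of the two-job query above can be sketched in Python (an illustration only, not Hive): first compute the max timestamp per (pix_id, coo_id) group, mirroring the GROUP BY subquery, then keep the rows whose timestamp equals that group max, mirroring the join.

```python
from collections import defaultdict
from datetime import datetime

# Sample rows from table2: (id, time_stamp, pix_id, coo_id).
rows = [
    ("id1",  "2014-01-01 21:23:23,273", "px1", "coo1"),
    ("id2",  "2014-01-01 22:01:22,377", "px1", "coo1"),
    ("id3",  "2014-01-01 22:25:06,196", "px1", "coo1"),
    ("id4",  "2014-01-01 22:51:39,487", "px1", "coo1"),
    ("id5",  "2014-01-01 02:05:57,875", "px1", "coo2"),
    ("id6",  "2014-01-01 02:09:42,675", "px1", "coo2"),
    ("id7",  "2014-01-01 23:19:42,059", "px1", "coo3"),
    ("id8",  "2014-01-01 23:34:51,782", "px1", "coo3"),
    ("id9",  "2014-01-01 06:13:05,531", "px2", "coo2"),
    ("id10", "2014-01-01 06:27:36,676", "px2", "coo2"),
    ("id11", "2014-01-01 06:59:43,999", "px2", "coo2"),
    ("id12", "2014-01-01 09:21:57,325", "px3", "coo3"),
    ("id13", "2014-01-01 17:19:06,956", "px4", "coo4"),
    ("id14", "2014-01-01 17:27:05,128", "px4", "coo4"),
]

def to_epoch(ts):
    # Like UNIX_TIMESTAMP: parse up to seconds, ignoring the ",millis" suffix.
    return datetime.strptime(ts.split(",")[0], "%Y-%m-%d %H:%M:%S").timestamp()

# Job 1: GROUP BY pix_id, coo_id with MAX(UNIX_TIMESTAMP(time_stamp)).
max_epoch = defaultdict(float)
for _id, ts, pix, coo in rows:
    max_epoch[(pix, coo)] = max(max_epoch[(pix, coo)], to_epoch(ts))

# Job 2: join back and keep rows whose timestamp equals the group max.
latest = [r for r in rows if to_epoch(r[1]) == max_epoch[(r[2], r[3])]]
print([r[0] for r in latest])
# → ['id4', 'id6', 'id8', 'id11', 'id12', 'id14']
```

This matches the expected output in the question; the second pass is what costs the extra MapReduce job in Hive.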
Posted on 2014-01-29 18:46:31
You can generate this with a single job step using the UDFs from Brickhouse ( http://github.com/klout/brickhouse ).
select array_index( map_keys( max_map ), 0 ) as id,
       from_unixtime( array_index( map_values( max_map ), 0 ) ) as time_stamp,
       pix_id,
       coo_id
from (
   select pix_id, coo_id,
          collect_max( id, unix_timestamp(time_stamp) ) as max_map
   from table2
   group by pix_id, coo_id ) cm;

For small datasets this makes little difference, but for very large datasets it lets you solve the problem with a single pass over the data.
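The single-pass idea behind Brickhouse's collect_max can be sketched in Python (illustrative only, not the actual UDF): while scanning, keep per group only the id carrying the largest timestamp, so no second join pass over the data is needed. The epoch values below are made-up small numbers for illustration.

```python
# One-pass sketch of the collect_max idea: for each (pix_id, coo_id) group,
# retain only the id with the largest epoch, the way
# collect_max(id, unix_timestamp(time_stamp)) keeps the top entry per group.
rows = [
    ("id1", 100, "px1", "coo1"),
    ("id4", 400, "px1", "coo1"),
    ("id5", 150, "px1", "coo2"),
    ("id6", 160, "px1", "coo2"),
]

max_map = {}  # (pix_id, coo_id) -> (id, epoch)
for _id, epoch, pix, coo in rows:  # a single pass over the data
    key = (pix, coo)
    if key not in max_map or epoch > max_map[key][1]:
        max_map[key] = (_id, epoch)

print(max_map)
# → {('px1', 'coo1'): ('id4', 400), ('px1', 'coo2'): ('id6', 160)}
```

State per group is constant-size, which is why this approach stays cheap even on very large inputs.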
https://stackoverflow.com/questions/21422560