To work around performance issues with Amazon, I'm trying to use s3distcp to copy files from S3 into my EMR cluster for local processing. As a first test, I'm copying one day's worth of data from a directory, 2160 files, using the --groupBy option to collapse them into one (or a few) files.
The job appears to run fine, showing map/reduce progressing to 100%, but at that point the process hangs and never comes back. How can I find out what is going on?
The source files are GZipped text files stored in S3, each around 30 MB. This is a plain-vanilla Amazon EMR cluster, and I'm running s3distcp from a shell on the master node.
hadoop@ip-xxx:~$ hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3n://xxx/click/20140520 --dest hdfs:////data/click/20140520 --groupBy ".*(20140520).*" --outputCodec lzo
14/05/21 20:06:32 INFO s3distcp.S3DistCp: Running with args: [Ljava.lang.String;@26f3bbad
14/05/21 20:06:35 INFO s3distcp.S3DistCp: Using output path 'hdfs:/tmp/9f423c59-ec3a-465e-8632-ae449d45411a/output'
14/05/21 20:06:35 INFO s3distcp.S3DistCp: GET http://169.254.169.254/latest/meta-data/placement/availability-zone result: us-west-2b
14/05/21 20:06:35 INFO s3distcp.S3DistCp: Created AmazonS3Client with conf KeyId AKIAJ5KT6QSV666K6KHA
14/05/21 20:06:37 INFO s3distcp.FileInfoListing: Opening new file: hdfs:/tmp/9f423c59-ec3a-465e-8632-ae449d45411a/files/1
14/05/21 20:06:38 INFO s3distcp.S3DistCp: Created 1 files to copy 2160 files
14/05/21 20:06:38 INFO mapred.JobClient: Default number of map tasks: null
14/05/21 20:06:38 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 72
14/05/21 20:06:38 INFO mapred.JobClient: Default number of reduce tasks: 3
14/05/21 20:06:39 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache
14/05/21 20:06:39 INFO mapred.JobClient: Setting group to hadoop
14/05/21 20:06:39 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/21 20:06:39 INFO mapred.JobClient: Running job: job_201405211343_0031
14/05/21 20:06:40 INFO mapred.JobClient: map 0% reduce 0%
14/05/21 20:06:53 INFO mapred.JobClient: map 1% reduce 0%
14/05/21 20:06:56 INFO mapred.JobClient: map 4% reduce 0%
14/05/21 20:06:59 INFO mapred.JobClient: map 36% reduce 0%
14/05/21 20:07:00 INFO mapred.JobClient: map 44% reduce 0%
14/05/21 20:07:02 INFO mapred.JobClient: map 54% reduce 0%
14/05/21 20:07:05 INFO mapred.JobClient: map 86% reduce 0%
14/05/21 20:07:06 INFO mapred.JobClient: map 94% reduce 0%
14/05/21 20:07:08 INFO mapred.JobClient: map 100% reduce 10%
14/05/21 20:07:11 INFO mapred.JobClient: map 100% reduce 19%
14/05/21 20:07:14 INFO mapred.JobClient: map 100% reduce 27%
14/05/21 20:07:17 INFO mapred.JobClient: map 100% reduce 29%
14/05/21 20:07:20 INFO mapred.JobClient: map 100% reduce 100%
[hangs here]

The job listing shows:
hadoop@xxx:~$ hadoop job -list
1 job currently running
JobId State StartTime UserName Priority SchedulingInfo
job_201405211343_0031 1 1400702799339 hadoop NORMAL NA

There is nothing in the target HDFS directory:
hadoop@xxx:~$ hadoop dfs -ls /data/click/

Any ideas?
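As a sanity check before launching a job like the one above, the --groupBy regex can be exercised locally: s3distcp concatenates all files whose capture groups are equal, so every key should yield the same group. A minimal sketch in Python (the file keys are hypothetical):

```python
import re

# Pattern from the s3distcp command; s3distcp collapses all files
# whose --groupBy capture groups are equal into one output file.
pattern = re.compile(r".*(20140520).*")

# Hypothetical S3 keys standing in for the 2160 real files.
keys = [
    "click/20140520/part-0001.gz",
    "click/20140520/part-0002.gz",
]

groups = {pattern.match(k).group(1) for k in keys}
print(groups)  # -> {'20140520'}: one distinct group, so one output file
```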
Posted on 2014-09-03 07:15:07
I ran into a similar issue. All I needed was to add a trailing slash to the end of the directory paths. With that, the job completed and the data showed up, instead of hanging at 100%:

hadoop@ip:~$ hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3n://xxx/click/20140520/ --dest hdfs:///data/click/20140520/ --groupBy ".*(20140520).*"
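One plausible reason the trailing slash matters (an assumption on my part, not something the s3distcp docs state): S3 listings are prefix-based, so a source path without a trailing slash can also match sibling keys that merely share the same character prefix. A quick Python sketch with hypothetical keys:

```python
# Hypothetical S3 keys: one inside the intended "directory", one in a
# sibling prefix that happens to share the same leading characters.
keys = ["click/20140520/a.gz", "click/201405201/b.gz"]

no_slash = [k for k in keys if k.startswith("click/20140520")]
with_slash = [k for k in keys if k.startswith("click/20140520/")]

print(no_slash)    # -> both keys: the bare prefix also matches the sibling
print(with_slash)  # -> ['click/20140520/a.gz']: only the directory contents
```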
Posted on 2014-09-11 21:13:11
Use s3:// instead of s3n://:
hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3://xxx/click/20140520 --dest hdfs:///data/click/20140520 --groupBy ".*(20140520).*"
https://stackoverflow.com/questions/23793026