
Problem running a simple rhadoop job - broken pipe error

Stack Overflow user
Asked on 2012-12-20 03:14:49
1 answer · 2.3K views · 0 followers · Score 3

I have a hadoop cluster set up with the rmr2 and rhdfs packages installed. I've been able to run some sample MR jobs via the CLI and through R scripts. For example, this works:

#!/usr/bin/env Rscript
require('rmr2')

small.ints = to.dfs(1:1000)
out = mapreduce( input = small.ints, map = function(k, v) keyval(v, v^2))
df = as.data.frame( from.dfs( out) )
colnames(df) = c('n', 'n2')
str(df)

Final output:

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

'data.frame':   1000 obs. of  2 variables:
 $ n : int  1 2 3 4 5 6 7 8 9 10 ...
 $ n2: num  1 4 9 16 25 36 49 64 81 100 ...

I'm now trying to take the next step and write my own MR job. I have a file (`/user/michael/batsmall.csv`) containing some batting statistics:

aardsda01,2004,1,SFN,NL,11,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11
aardsda01,2006,1,CHN,NL,45,43,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,45
aardsda01,2007,1,CHA,AL,25,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2
aardsda01,2008,1,BOS,AL,47,5,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,5
aardsda01,2009,1,SEA,AL,73,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
aardsda01,2010,1,SEA,AL,53,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

(batsmall.csv is an excerpt from a much larger file; at this point I'm really just trying to prove that I can read and analyze a file from HDFS.)
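Before wiring this into MapReduce, it can help to verify the grouped-mean logic in plain R, outside Hadoop entirely. A minimal local sketch (the inline rows mirror batsmall.csv; the choice of column 6, games played in this layout, as the numeric stat to average is illustrative, not taken from the question):

```r
# Local sanity check of the grouped-mean logic, no Hadoop involved.
lines <- c(
  "aardsda01,2004,1,SFN,NL,11,11,0",
  "aardsda01,2006,1,CHN,NL,45,43,2"
)
df <- read.csv(text = paste(lines, collapse = "\n"), header = FALSE)

# V1 is the player id; V6 is a numeric stat (games played in this layout).
# Group by player and take the mean -- the same shape as the MR job.
result <- aggregate(df$V6, by = list(player = df$V1), FUN = mean)
print(result)  # one row per player with the mean of V6
```

Once a snippet like this produces the expected data frame, any remaining failure in the cluster run is more likely in the rmr2 plumbing (key/value shapes, input format) than in the statistics.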

Here is my script:

#!/usr/bin/env Rscript

require('rmr2');
require('rhdfs');

hdfs.init()
hdfs.rmr("/user/michael/rMean")

findMean = function (input, output) {
  mapreduce(input = input,
            output = output,
            input.format = 'csv',
            map = function(k, fields) {
              myField <- fields[[5]]
              keyval(fields[[0]], myField)
            },
            reduce = function(key, vv) {
              keyval(key, mean(as.numeric(vv)))
            }
    )
}

from.dfs(findMean("/home/michael/r/Batting.csv", "/home/michael/r/rMean"))
print(hdfs.read.text.file("/user/michael/batsmall.csv"))

This fails every time, and looking at the hadoop logs it appears to be a broken pipe error. I can't figure out what's causing it. Since other jobs work, I'd assume the problem is in my script rather than my configuration, but I can't work it out. Admittedly I'm new to hadoop and relatively new to R.

Here is the job output:

[michael@hadoop01 r]$ ./rtest.r
Loading required package: rmr2
Loading required package: Rcpp
Loading required package: RJSONIO
Loading required package: methods
Loading required package: digest
Loading required package: functional
Loading required package: stringr
Loading required package: plyr
Loading required package: rhdfs
Loading required package: rJava

HADOOP_CMD=/usr/bin/hadoop

Be sure to run hdfs.init()
Deleted hdfs://hadoop01.dev.terapeak.com/user/michael/rMean
[1] TRUE
packageJobJar: [/tmp/Rtmp2XnCL3/rmr-local-env55d1533355d7, /tmp/Rtmp2XnCL3/rmr-global-env55d119877dd3, /tmp/Rtmp2XnCL3/rmr-streaming-map55d13c0228b7, /tmp/Rtmp2XnCL3/rmr-streaming-reduce55d150f7ffa8, /tmp/hadoop-michael/hadoop-unjar5464463427878425265/] [] /tmp/streamjob4293464845863138032.jar tmpDir=null
12/12/19 11:09:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/12/19 11:09:41 INFO mapred.FileInputFormat: Total input paths to process : 1
12/12/19 11:09:42 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-michael/mapred/local]
12/12/19 11:09:42 INFO streaming.StreamJob: Running job: job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:09:42 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:09:43 INFO streaming.StreamJob:  map 0%  reduce 0%
12/12/19 11:10:15 INFO streaming.StreamJob:  map 100%  reduce 100%
12/12/19 11:10:15 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:10:15 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:10:15 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:10:15 ERROR streaming.StreamJob: Job not successful. Error: NA
12/12/19 11:10:15 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { :
  hadoop streaming failed with error code 1
Calls: findMean -> mapreduce -> mr
Execution halted

And a sample exception from the jobtracker:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

1 Answer

Stack Overflow user

Accepted answer

Answered on 2012-12-20 03:30:05

You need to check the stderr of the failed attempts; the jobtracker web UI is the easiest way to get at it. An educated guess: fields is a data frame and you are accessing it like a list, which is possible but unusual, and the error probably stems indirectly from that.
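The indexing problem the answer hints at can be reproduced in plain R. With `input.format = 'csv'`, the map function's second argument arrives as a data frame, and R indexing is 1-based, so `fields[[0]]` raises an error rather than returning the first column. A sketch (the "intended" column choices below are an assumption about what the question's script was aiming for, not a verified fix):

```r
# fields arrives as a data frame when input.format = 'csv'
fields <- data.frame(V1 = "aardsda01", V2 = 2004, V3 = 1,
                     V4 = "SFN", V5 = "NL", V6 = 11,
                     stringsAsFactors = FALSE)

# R is 1-based: fields[[0]] is an error, which would kill the R
# subprocess and surface in streaming as a broken pipe.
bad <- try(fields[[0]], silent = TRUE)
print(inherits(bad, "try-error"))  # TRUE

# The key was presumably meant to be the first column, not the zeroth:
key <- fields[[1]]   # "aardsda01"
val <- fields[[5]]   # "NL" -- note column 5 is the league here, which
                     # is probably not the numeric stat you want to mean()
print(key)
```

When the R process dies mid-record like this, Hadoop streaming only sees its end of the pipe close, hence the unhelpful "broken pipe" symptom at the Java layer.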

We also have a debugging document on the RHadoop wiki with plenty of suggestions.
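One debugging technique in that spirit worth singling out: rmr2 can run jobs in an in-process "local" backend, so map and reduce errors print as ordinary R tracebacks instead of disappearing behind a streaming broken pipe. A sketch, assuming the `rmr.options()` interface of rmr2 at the time (not runnable without a working rmr2 install):

```r
library(rmr2)

# Run map/reduce in the current R session -- errors become plain
# R errors with tracebacks, no Hadoop streaming in the way.
rmr.options(backend = "local")

small <- to.dfs(1:10)
out <- from.dfs(mapreduce(input = small,
                          map = function(k, v) keyval(v, v^2)))

# Switch back to the cluster once the logic works locally.
rmr.options(backend = "hadoop")
```

Debugging the map function locally first, then flipping the backend, tends to isolate R-level bugs from cluster-configuration issues.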

Finally, there is a dedicated RHadoop Google group where you can interact with a number of enthusiastic users. Or you can go it alone.

Score 3
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/13959490
