I am running a simple map-reduce job. The job uses 250 files from the Common Crawl data.
e.g. s3://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/
If I use 50 or 100 files, everything works fine, but with 250 files I get this error:
java.io.IOException: Attempted read from closed stream.
at org.apache.commons.httpclient.ContentLengthInputStream.read(ContentLengthInputStream.java:159)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:107)
at org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:76)
at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:136)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:111)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readByte(DataInputStream.java:248)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1707)
at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1773)
at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:1849)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$SubMapRecordReader.nextKeyValue(MultithreadedMapper.java:180)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$MapRunner.run(MultithreadedMapper.java:268)
Any clues?
Posted on 2013-01-08 12:06:45
How many map slots do you have for processing the input? Is it close to 100?
This is just a guess, but it could be that while the first batch of files is being processed, the connections to S3 time out, and by the time slots become free to process more files, those connections are no longer open. I believe timeout errors from NativeS3FileSystem show up as IOExceptions.
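If that is the cause, two levers worth trying are keeping fewer S3 streams open at once (fewer MultithreadedMapper threads) and allowing more map attempts, so a task that dies on a stale stream is retried with a fresh connection. A rough driver sketch along those lines; the class name CrawlJobDriver, the MyMapper placeholder, the key/value types, and the numeric values are illustrative assumptions, not taken from the job above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CrawlJobDriver {

    // Placeholder identity mapper; the real record types depend on the job.
    public static class MyMapper extends Mapper<Text, Text, Text, Text> {
        @Override
        protected void map(Text key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Allow a few extra attempts so a map task killed by a stale S3
        // stream is re-run with a fresh connection (old-API property name,
        // matching the Hadoop 1.x line in the stack trace).
        conf.setInt("mapred.map.max.attempts", 6);

        Job job = new Job(conf, "common-crawl-segment");
        job.setJarByClass(CrawlJobDriver.class);

        // The stack trace shows MultithreadedMapper; fewer threads per task
        // means fewer S3 input streams held open while waiting on work.
        job.setMapperClass(MultithreadedMapper.class);
        MultithreadedMapper.setMapperClass(job, MyMapper.class);
        MultithreadedMapper.setNumberOfThreads(job, 4);

        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        // Segment path from the question; whether s3:// or s3n:// resolves
        // to NativeS3FileSystem depends on the cluster configuration.
        FileInputFormat.addInputPath(job,
            new Path("s3n://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/"));
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Whether the thread count and retry values help depends on how many map slots the cluster has and how long tasks sit queued, so treat them as starting points to experiment with rather than recommended settings.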
https://stackoverflow.com/questions/14203621