I'm learning Hadoop, and this problem has been bugging me for a while. Basically, I write a SequenceFile to disk and then read it back. However, every time I read it I get an EOFException. Digging deeper, I found that while writing the sequence file it gets truncated prematurely: the truncation always happens after writing index 962, and the file always ends up at exactly 45056 bytes.
I'm using Java 8 and Hadoop 2.5.1 on a MacBook Pro. In fact, I tried the same code on another Linux machine under Java 7, and the same thing happened.
I can rule out the writer/reader not being closed properly. I tried the old-style try/catch with an explicit writer.close() as shown in the code, and I also tried the newer try-with-resources approach. Neither works.
Any help would be much appreciated.
Here's the code I'm using:
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.util.ReflectionUtils;

import static org.apache.hadoop.io.SequenceFile.Writer.keyClass;
import static org.apache.hadoop.io.SequenceFile.Writer.stream;
import static org.apache.hadoop.io.SequenceFile.Writer.valueClass;

public class SequenceFileDemo {

    private static final String[] DATA = { "One, two, buckle my shoe",
            "Three, four, shut the door",
            "Five, six, pick up sticks",
            "Seven, eight, lay them straight",
            "Nine, ten, a big fat hen" };

    public static void main(String[] args) throws Exception {
        String uri = "file:///Users/andy/Downloads/puzzling.seq";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        Path path = new Path(uri);
        IntWritable key = new IntWritable();
        Text value = new Text();

        // API change
        try {
            SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    stream(fs.create(path)),
                    keyClass(IntWritable.class),
                    valueClass(Text.class));
            for (int i = 0; i < 1024; i++) {
                key.set(i);
                value.clear();
                value.set(DATA[i % DATA.length]);
                writer.append(key, value);
                if ((i - 1) % 100 == 0) writer.hflush();
                System.out.printf("[%s]\t%s\t%s\n", writer.getLength(), key, value);
            }
            writer.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        try {
            SequenceFile.Reader reader = new SequenceFile.Reader(conf,
                    SequenceFile.Reader.file(path));
            Class<?> keyClass = reader.getKeyClass();
            Class<?> valueClass = reader.getValueClass();
            boolean isWritableSerialization = false;
            try {
                keyClass.asSubclass(WritableComparable.class);
                isWritableSerialization = true;
            } catch (ClassCastException e) {
            }
            if (isWritableSerialization) {
                WritableComparable<?> rKey = (WritableComparable<?>) ReflectionUtils.newInstance(keyClass, conf);
                Writable rValue = (Writable) ReflectionUtils.newInstance(valueClass, conf);
                while (reader.next(rKey, rValue)) {
                    System.out.printf("[%s] %d %s=%s\n", reader.syncSeen(), reader.getPosition(), rKey, rValue);
                }
            } else {
                // make sure io.serializations has the serialization in use when writing the sequence file
            }
            reader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Posted on 2015-01-13 11:50:17
I actually found the error: it's because you never close the stream you create in Writer.stream(fs.create(path)).
For some reason close() is not propagated to the stream you just created there. I'd guess it's a bug, but I'm too lazy right now to look it up in Jira.
One way to fix the problem is simply to use Writer.file(path) instead.
Obviously, you can also just close the create stream explicitly. Here is my corrected example:
Path path = new Path("file:///tmp/puzzling.seq");
try (FSDataOutputStream stream = fs.create(path)) {
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf, Writer.stream(stream),
            Writer.keyClass(IntWritable.class), Writer.valueClass(NullWritable.class))) {
        for (int i = 0; i < 1024; i++) {
            writer.append(new IntWritable(i), NullWritable.get());
        }
    }
}

try (SequenceFile.Reader reader = new SequenceFile.Reader(conf, Reader.file(path))) {
    Class<?> keyClass = reader.getKeyClass();
    Class<?> valueClass = reader.getValueClass();
    WritableComparable<?> rKey = (WritableComparable<?>) ReflectionUtils.newInstance(keyClass, conf);
    Writable rValue = (Writable) ReflectionUtils.newInstance(valueClass, conf);
    while (reader.next(rKey, rValue)) {
        System.out.printf("%s = %s\n", rKey, rValue);
    }
}

Posted on 2015-01-13 07:47:51
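For completeness, a rough sketch of the Writer.file(path) alternative mentioned above (assuming the same conf, path, and imports as the corrected example, with hadoop-common on the classpath; this is an illustrative sketch, not tested verbatim):

```java
// With Writer.file(path), the writer creates the underlying stream itself,
// so it "owns" the stream and closes it when close() runs
// (here implicitly, via try-with-resources).
try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        Writer.file(path),
        Writer.keyClass(IntWritable.class),
        Writer.valueClass(NullWritable.class))) {
    for (int i = 0; i < 1024; i++) {
        writer.append(new IntWritable(i), NullWritable.get());
    }
} // no separately created stream left for the caller to manage
```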
I think you're missing a writer.close() after the write loop. That would be the final flush before you start reading.
Posted on 2015-01-14 02:25:57
Thanks to Thomas.
It boils down to whether the writer "owns" the stream or not. When creating the writer, if you pass in the option Writer.file(path), the writer "owns" the underlying stream it creates internally and will close it when close() is called. However, if we pass in Writer.stream(aStream), the writer assumes someone else is responsible for that stream and will not close it when close() is called. In short, it's not a bug; I just didn't understand it well enough.
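The same ownership convention appears throughout java.io. As a stdlib-only analogy (the OwnershipDemo, TrackedStream, and OwningWriter classes below are invented for illustration and are not part of Hadoop), a wrapper that closes only streams it owns behaves like this:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class OwnershipDemo {

    // Records whether close() ever reached the underlying stream.
    static class TrackedStream extends ByteArrayOutputStream {
        boolean closed = false;
        @Override
        public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    // Invented wrapper: closes the underlying stream only when it owns it,
    // mirroring Writer.file(path) (owns) vs Writer.stream(s) (does not own).
    static class OwningWriter implements AutoCloseable {
        private final TrackedStream out;
        private final boolean ownsStream;

        OwningWriter(TrackedStream out, boolean ownsStream) {
            this.out = out;
            this.ownsStream = ownsStream;
        }

        void append(int value) {
            out.write(value);
        }

        @Override
        public void close() throws IOException {
            if (ownsStream) {
                out.close(); // like Writer.file(path): close() propagates
            }
            // like Writer.stream(s): the caller must close the stream itself
        }
    }

    public static void main(String[] args) throws Exception {
        TrackedStream owned = new TrackedStream();
        try (OwningWriter w = new OwningWriter(owned, true)) {
            w.append(1);
        }
        System.out.println("owned stream closed: " + owned.closed);

        TrackedStream borrowed = new TrackedStream();
        try (OwningWriter w = new OwningWriter(borrowed, false)) {
            w.append(1);
        }
        System.out.println("borrowed stream closed: " + borrowed.closed);
    }
}
```

Running it prints `owned stream closed: true` and then `borrowed stream closed: false`, which is exactly the asymmetry the question ran into: with Writer.stream(...), forgetting to close the stream yourself leaves buffered data unflushed.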
https://stackoverflow.com/questions/27916872