I'm new to MapReduce, and here is my question.
In the hdfs://centmaster/input directory I have two files:
file1.txt:
2012-3-1 a
2012-3-2 b
2012-3-3 c
2012-3-4 d
2012-3-5 a
2012-3-6 b
2012-3-7 c
2012-3-3 c

and file2.txt:
2012-3-1 b
2012-3-2 a
2012-3-3 b
2012-3-4 d
2012-3-5 a
2012-3-6 c
2012-3-7 d
2012-3-3 c

I ran the following deduplication MapReduce code:
package Hadoop_for_jar;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class Dedup2 {
    public static class Map extends Mapper<Object, Text, Text, Text> {
        private static Text line = new Text();
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            line = value;
            context.write(line, new Text(""));
        }
    }
    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            context.write(key, new Text(""));
        }
    }
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        System.setProperty("HADOOP_USER_NAME", "root");
        String[] otherArgs = {"hdfs://centmaster:9000/input", "hdfs://centmaster:9000/output/debup1"};
        Job job = new Job(conf, "Data Deduplication");
        job.setJarByClass(Dedup2.class);
        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

It ran successfully. The result is:
2012-3-1 a
2012-3-1 b
2012-3-2 a
2012-3-2 b
2012-3-3 b
2012-3-3 c
2012-3-4 d
2012-3-5 a
2012-3-6 b
2012-3-6 c
2012-3-7 c
2012-3-7 d

Now, I noticed that the Reducer never actually uses the value part of its (key, value) input, which comes from the Mapper's output; it is useless in this program. So I tried changing that value type from Text to IntWritable, expecting to get the same result, and made the following changes:
package Hadoop_for_jar;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class Dedup2 {
    public static class Map extends Mapper<Object, Text, Text, IntWritable> { // changed Text to IntWritable
        private static Text line = new Text();
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            line = value;
            context.write(line, new IntWritable(0)); // changed Text("") to IntWritable(0)
        }
    }
    public static class Reduce extends Reducer<Text, IntWritable, Text, Text> { // changed Text to IntWritable
        public void reduce(Text key, Iterable<IntWritable> values, Context context) // changed Text to IntWritable
                throws IOException, InterruptedException {
            context.write(key, new Text(""));
        }
    }
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        System.setProperty("HADOOP_USER_NAME", "root");
        String[] otherArgs = {"hdfs://centmaster:9000/input", "hdfs://centmaster:9000/output/debup2"};
        Job job = new Job(conf, "Data Deduplication");
        job.setJarByClass(Dedup2.class);
        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The /output/debup2 directory was created, but it is empty, so my assumption was wrong. My question is: does reduce() only accept (Text, Text) as input? Thanks!
Posted on 2014-10-23 08:44:30
The problem is solved. Per the link below, when you reuse a Reducer as a Combiner, the Combiner's output types must match the Mapper's output types (which are the Reducer's input types). Here the Combiner emits (Text, Text) while the map output is declared as (Text, IntWritable), so the framework silently drops the records. If the Mapper and Reducer have different output types, avoid setting a Combiner (or write a separate Combiner whose output types match the Mapper's).
Error: java.io.IOException: wrong value class: class org.apache.hadoop.io.Text is not class Myclass
https://stackoverflow.com/questions/26501594
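To make the fix concrete, here is a minimal sketch of the corrected driver section for the second version, assuming the Map and Reduce classes above are unchanged. The Combiner line is dropped, and the map output types are declared explicitly; without setMapOutputValueClass, the map output value class defaults to the job's output value class (Text), which would still conflict with the IntWritable the Mapper emits:

```java
Job job = new Job(conf, "Data Deduplication");
job.setJarByClass(Dedup2.class);
job.setMapperClass(Map.class);
// No setCombinerClass here: Reduce emits (Text, Text), which would violate
// the (Text, IntWritable) contract of the map output.
job.setReducerClass(Reduce.class);
job.setMapOutputKeyClass(Text.class);          // map output key type
job.setMapOutputValueClass(IntWritable.class); // map output value type, no longer Text
job.setOutputKeyClass(Text.class);             // final (reducer) output types
job.setOutputValueClass(Text.class);
```

With this configuration the map output (Text, IntWritable) flows straight to the Reducer, and the job should produce the same deduplicated listing as the first version.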