
Does reduce() only accept (Text, Text)?

Stack Overflow user
Asked on 2014-10-22 14:36:46
1 answer · 295 views · 0 followers · 0 votes

I am new to MapReduce. Here is my problem.

In the hdfs://centmaster/input directory, I have two files:

file1.txt:

2012-3-1 a
2012-3-2 b
2012-3-3 c
2012-3-4 d
2012-3-5 a
2012-3-6 b
2012-3-7 c
2012-3-3 c

And file2.txt:

2012-3-1 b
2012-3-2 a
2012-3-3 b
2012-3-4 d
2012-3-5 a
2012-3-6 c
2012-3-7 d
2012-3-3 c

I ran the following data-deduplication MapReduce code:

package Hadoop_for_jar;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Dedup2 {

    public static class Map extends Mapper<Object,Text,Text,Text>{
        private static Text line=new Text();
        public void map(Object key,Text value,Context context)
                throws IOException,InterruptedException{
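            // Emit the whole input line as the key with an empty value,
            // so that identical lines end up in the same reduce group.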
            line=value;
            context.write(line, new Text(""));
        }

    }

    public static class Reduce extends Reducer<Text,Text,Text,Text>{
        public void reduce(Text key,Iterable<Text> values,Context context)
                throws IOException,InterruptedException{
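            // Each distinct line arrives here exactly once as a key; write it out once.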
            context.write(key, new Text(""));
        }
    }

    public static void main(String[] args) throws Exception{
        Configuration conf = new Configuration();
        System.setProperty("HADOOP_USER_NAME", "root");
        String[] otherArgs = {"hdfs://centmaster:9000/input", "hdfs://centmaster:9000/output/debup1"};
        Job job = new Job(conf, "Data Deduplication");
        job.setJarByClass(Dedup2.class);

        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
     }
}

It ran successfully, and the result was:

2012-3-1 a
2012-3-1 b
2012-3-2 a
2012-3-2 b
2012-3-3 b
2012-3-3 c
2012-3-4 d
2012-3-5 a
2012-3-6 b
2012-3-6 c
2012-3-7 c
2012-3-7 d

Now, I noticed that the Reducer never actually uses the values of its (key, value) input, which are the Mapper's output values, so they are useless for this program. I wanted to change the value type from Text to IntWritable and expected to get the same result. So I made the following changes:

package Hadoop_for_jar;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Dedup2 {

    public static class Map extends Mapper<Object,Text,Text,IntWritable>{   //Change Text to IntWritable
        private static Text line=new Text();
        public void map(Object key,Text value,Context context)
                throws IOException,InterruptedException{
            line=value;
            context.write(line, new IntWritable(0));    //Change Text("") to IntWritable(0)
        }

    }

    public static class Reduce extends Reducer<Text,IntWritable,Text,Text>{ //Change Text to IntWritable
        public void reduce(Text key,Iterable<IntWritable> values,Context context)   //Change Text to IntWritable
                throws IOException,InterruptedException{
            context.write(key, new Text(""));
        }

    }

    public static void main(String[] args) throws Exception{
        Configuration conf = new Configuration();
        System.setProperty("HADOOP_USER_NAME", "root");
        String[] otherArgs = {"hdfs://centmaster:9000/input", "hdfs://centmaster:9000/output/debup2"};
        Job job = new Job(conf, "Data Deduplication");
        job.setJarByClass(Dedup2.class);

        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
     }
}

A /output/debup2 directory was created, but it is empty, so my assumption was wrong. My question is: does the Reducer only accept (Text, Text) as input? Thanks!
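
For comparison, Hadoop reducers are not restricted to (Text, Text) in general: the classic word-count job uses a Reducer<Text, IntWritable, Text, IntWritable>. The following is only a minimal sketch of that standard pattern (reusing the imports from the listings above), not code from this question:

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            // Sum the counts emitted by the mapper for this word.
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

So the Reducer's type parameters themselves should not be the problem.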


1 Answer

Stack Overflow user

Answered on 2014-10-23 08:44:30

The problem is solved. According to the link below, the Mapper's output types and the Reducer's output types must be the same when the Reduce class is also used as the Combiner. If you want the Mapper and Reducer to have different output types, you should avoid using a Combiner.

Error: java.io.IOException: wrong value class: class org.apache.hadoop.io.Text is not class Myclass
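
Concretely, the second listing has two related problems: the map output value class is never declared, so it defaults to the final output value class (Text) while the Mapper actually emits IntWritable, and the Reduce class registered as the Combiner writes Text values on the map side. Either way the framework hits the "wrong value class" situation from the linked question. A minimal sketch of the driver changes that should make the IntWritable version work, assuming the Map and Reduce classes from the second listing stay unchanged:

    // Driver changes only; Map and Reduce are the same as in the second listing.
    Job job = new Job(conf, "Data Deduplication");
    job.setJarByClass(Dedup2.class);

    job.setMapperClass(Map.class);
    // No setCombinerClass(...): the Reduce class writes Text values, which would
    // not match the IntWritable map output values on the map side.
    job.setReducerClass(Reduce.class);

    // The map output types differ from the final output types, so declare them explicitly.
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);

    // Final (reducer) output types.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);

Alternatively, keep the Combiner and use Text everywhere, as in the first listing; the key point is that a Combiner's output types must match the declared map output types, because it runs between the map and reduce phases.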

0 votes
The original page content is provided by Stack Overflow; translation supported by Tencent Cloud Xiaowei's dedicated IT-domain engine.
Original link:

https://stackoverflow.com/questions/26501594
