Java – Error:(63, 40) java: incompatible types: org.apache.hadoop.mapreduce.Job cannot be converted to org.apache.hadoop.mapred.JobConf

Error:(63, 40) java: incompatible types: org.apache.hadoop.mapreduce.Job cannot be converted to org.apache.hadoop.mapred.JobConf… here is a solution to the problem.

I just wrote a simple Hadoop program in the IntelliJ IDE, but when I try to compile it I get this error:

Error:(63, 40) java: incompatible types:
org.apache.hadoop.mapreduce.Job cannot be converted to
org.apache.hadoop.mapred.JobConf

Here is the code for this little program of mine:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Solution

Root cause:

The code mixes Hadoop's two MapReduce APIs. The old-API org.apache.hadoop.mapred.FileOutputFormat expects a JobConf as the first parameter of setOutputPath(), while the new-API org.apache.hadoop.mapreduce.lib.output.FileOutputFormat expects a Job.
(Job belongs to the new API; JobConf belongs to the old one.)
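
To see the mismatch concretely, here is a minimal sketch (the class name ApiMixDemo is made up for illustration; the setOutputPath signatures are as documented in the Hadoop javadocs):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
// New-API FileOutputFormat lives under mapreduce.lib.output:
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ApiMixDemo {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "api-mix-demo");

        // New API: setOutputPath(Job job, Path outputDir) - a Job compiles fine.
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        // Old API: setOutputPath(JobConf conf, Path outputDir) - passing the
        // new-API Job here is exactly what triggers the error in the question:
        // org.apache.hadoop.mapred.FileOutputFormat.setOutputPath(job, new Path(args[0]));
        // error: incompatible types: Job cannot be converted to JobConf
    }
}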


Solution:

Change this line in the code:

import org.apache.hadoop.mapred.FileOutputFormat;

to

import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
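
After this change, every MapReduce class in the program comes from the new org.apache.hadoop.mapreduce API, and the import block reads:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; // was org.apache.hadoop.mapred.FileOutputFormat

No other change is needed: FileOutputFormat.setOutputPath(job, new Path(args[1])) now resolves to the new-API method, which accepts a Job.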
