Java – Run multiple MapReduce jobs in Hadoop

Running multiple MapReduce jobs in Hadoop: here is a solution to the problem.

Run multiple MapReduce jobs in Hadoop

I want to run a series of MapReduce jobs, so the simplest solution seems to be JobControl. Say I have two jobs, job1 and job2, and I want to run job2 after job1. I ran into some issues, and after hours of debugging I narrowed the code down to the following lines:

JobConf jobConf1 = new JobConf();  
JobConf jobConf2 = new JobConf();  
System.out.println("*** Point 1");
Job job1 = new Job(jobConf1);  
System.out.println("*** Point 2");
Job job2 = new Job(jobConf2);
System.out.println("*** Point 3");

I keep getting this output when I run the code :

*** Point 1
10/12/06 17:19:30 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
*** Point 2
10/12/06 17:19:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
*** Point 3

I’m guessing my problem has something to do with the “Cannot initialize JVM Metrics…” line. What does it mean? And how do I instantiate multiple jobs so that they can be passed to JobControl?
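For context, chaining two jobs with JobControl usually looks something like the sketch below, assuming a Hadoop version that ships `org.apache.hadoop.mapreduce.lib.jobcontrol` (older releases have an equivalent under `org.apache.hadoop.mapred.jobcontrol`). The job names are hypothetical, and all mapper/reducer/path setup is omitted:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

public class ChainJobs {
    public static void main(String[] args) throws Exception {
        // Mapper/reducer/input/output configuration omitted for brevity.
        Job job1 = new Job(new Configuration(), "job1");
        Job job2 = new Job(new Configuration(), "job2");

        // Wrap each Job in a ControlledJob so JobControl can track it.
        ControlledJob cj1 = new ControlledJob(job1.getConfiguration());
        cj1.setJob(job1);
        ControlledJob cj2 = new ControlledJob(job2.getConfiguration());
        cj2.setJob(job2);
        cj2.addDependingJob(cj1); // job2 starts only after job1 succeeds

        JobControl control = new JobControl("chain");
        control.addJob(cj1);
        control.addJob(cj2);

        // JobControl implements Runnable; run it on its own thread and poll.
        Thread runner = new Thread(control);
        runner.start();
        while (!control.allFinished()) {
            Thread.sleep(500);
        }
        control.stop();
    }
}
```

The “Cannot initialize JVM Metrics … already initialized” line itself is only an INFO-level message (the metrics system is initialized once per JVM) and is harmless.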

When I add job1.waitForCompletion(true) before initializing the second job, it gives me this error:

10/12/07 11:28:21 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/workspace/WikipediaSearch/__TEMP1
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at ch.ethz.nis.query.HadoopQuery.run(HadoopQuery.java:353)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at ch.ethz.nis.query.HadoopQuery.main(HadoopQuery.java:308)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

__TEMP1 is the output folder of the first job, which I want to use as input to the second job. Even though I call waitForCompletion, it still says that this path doesn’t exist.

Solution

Wow, what took two days to debug turned out to be a problem with Hadoop’s internal directory naming conventions. Apparently, for MapReduce input and output directories, you cannot choose names that begin with an underscore “_”: Hadoop treats such paths as hidden (the same convention it uses for things like _logs), so FileInputFormat silently skips them. Frustrating! The warnings and errors don’t help at all.
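With the rename in place, the simple sequential pattern from the question works. A minimal sketch (paths and job names hypothetical), where the intermediate directory avoids a leading underscore:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoPass {
    public static void main(String[] args) throws Exception {
        // Intermediate directory: "temp1", NOT "__TEMP1" -- names starting
        // with "_" are treated as hidden and skipped by FileInputFormat.
        Path input = new Path("input");
        Path temp = new Path("temp1");
        Path output = new Path("output");

        Job job1 = new Job(new Configuration(), "pass1");
        // setJarByClass / mapper / reducer setup omitted for brevity
        FileInputFormat.addInputPath(job1, input);
        FileOutputFormat.setOutputPath(job1, temp);
        if (!job1.waitForCompletion(true)) {
            System.exit(1); // abort the chain if the first pass fails
        }

        Job job2 = new Job(new Configuration(), "pass2");
        FileInputFormat.addInputPath(job2, temp);
        FileOutputFormat.setOutputPath(job2, output);
        System.exit(job2.waitForCompletion(true) ? 0 : 1);
    }
}
```

Checking the return value of waitForCompletion also guarantees the second job never launches against a half-written intermediate directory.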
