Java – How do I tell MapReduce how many mappers to use?

How do I tell MapReduce how many mappers to use?

I'm trying to optimize a MapReduce job for speed.

Is there any way to make Hadoop use a specific number of mapper/reducer processes? Or, at least, a minimum number of mapper processes?

The documentation says you can do this with the following method of the JobConf class:

public void setNumMapTasks(int n)

However, that approach is now obsolete, so I start my job with the Job class. What is the right way to do this?

Solution

The number of map tasks is determined by the number of blocks in the input. If the input file is 100 MB and the HDFS block size is 64 MB, the file occupies 2 blocks, so 2 map tasks will be generated. JobConf.setNumMapTasks() is only a hint to the framework (1).
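As a minimal sketch of passing that hint with the old mapred API (the class name, paths, and the value 10 below are placeholders I've assumed, not from the original post):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class MapHintExample {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MapHintExample.class);
            conf.setJobName("map-hint-example");

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // Only a hint: the InputFormat still decides the actual split
            // count, so a 100 MB input with a 64 MB block size yields 2
            // splits unless the hint asks for more.
            conf.setNumMapTasks(10);

            JobClient.runJob(conf);
        }
    }

With the new org.apache.hadoop.mapreduce API there is no setNumMapTasks(); as far as I know, the equivalent hint is the mapred.map.tasks configuration property (renamed mapreduce.job.maps in later releases).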

The number of reducers is set by the JobConf.setNumReduceTasks() method. This determines the total number of reduce tasks for the job. In addition, the mapred.tasktracker.reduce.tasks.maximum parameter determines how many reduce tasks can run in parallel on a single task tracker node.
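Since the asker is on the new API, here is a sketch of setting the reducer count with the Job class (class name, paths, and the count 4 are illustrative assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ReduceCountExample {
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "reduce-count-example");
            job.setJarByClass(ReduceCountExample.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            // Unlike the map-count hint, this value is honored exactly:
            // the job will run with 4 reduce tasks in total.
            job.setNumReduceTasks(4);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Note that mapred.tasktracker.reduce.tasks.maximum is a cluster-side setting (mapred-site.xml on each task tracker node), not something you set per job.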

You can find more information about the number of map and reduce tasks in (2).

(1) – http://hadoop.apache.org/mapreduce/docs/r0.21.0/api/org/apache/hadoop/mapred/JobConf.html#setNumMapTasks%28int%29
(2) – http://wiki.apache.org/hadoop/HowManyMapsAndReduces
