Writing Hadoop reducer output to Elasticsearch
I had some trouble figuring out how to write the output of a simple Hadoop job back to Elasticsearch using es-hadoop.
The job configuration is:
job.setOutputFormatClass(EsOutputFormat.class);
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(MapWritable.class);
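For context, here is a minimal sketch of the driver these three lines could sit in. The cluster address, index name, and class name are placeholders I made up, not part of the original job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.elasticsearch.hadoop.mr.EsOutputFormat;

public class AverageDriver {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Tell es-hadoop where the cluster is and which index/type
        // to write to (both values here are placeholders).
        configuration.set("es.nodes", "localhost:9200");
        configuration.set("es.resource", "myindex/mytype");
        // Speculative execution can produce duplicate writes to
        // Elasticsearch, so it is commonly disabled for es-hadoop jobs.
        configuration.setBoolean("mapreduce.map.speculative", false);
        configuration.setBoolean("mapreduce.reduce.speculative", false);

        Job job = Job.getInstance(configuration, "write-averages-to-es");
        job.setOutputFormatClass(EsOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(MapWritable.class);
        // ... mapper, reducer, and input configuration go here ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}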
Reducer does:
final DoubleWritable average = new DoubleWritable(sum / size);
final MapWritable output = new MapWritable();
output.put(key, average);
context.write(NullWritable.get(), output);
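For completeness, a minimal sketch of the reducer those lines could live in; the class name and the Text/DoubleWritable input types are assumptions inferred from the snippet:

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AverageReducer
        extends Reducer<Text, DoubleWritable, NullWritable, MapWritable> {
    @Override
    protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
            throws IOException, InterruptedException {
        double sum = 0;
        int size = 0;
        for (DoubleWritable value : values) {
            sum += value.get();
            size++;
        }
        final DoubleWritable average = new DoubleWritable(sum / size);
        final MapWritable output = new MapWritable();
        output.put(key, average); // one field per document: key -> average
        context.write(NullWritable.get(), output);
    }
}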
And yet I got this (to me) inexplicable error:
14/08/15 16:59:54 INFO mapreduce.Job: Task Id : attempt_1408106733881_0013_r_000000_2, Status : FAILED Error:
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException:
[org.elasticsearch.hadoop.serialization.field.MapWritableFieldExtractor@5796fabe] cannot extract value from object [org.apache.hadoop.io.MapWritable@dcdb8e97]
at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk$FieldWriter.write(TemplatedBulk.java:49)
at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.writeTemplate(TemplatedBulk.java:101)
at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.write(TemplatedBulk.java:77)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:130)
at org.elasticsearch.hadoop.mr.EsOutputFormat$EsRecordWriter.write(EsOutputFormat.java:161)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
The failure happens in the context.write() call. I'm a little confused. Any ideas?
Solution
It turns out I had made a mistake in the job configuration: I had added the following line:
configuration.set("es.mapping.id", "_id");
The _id field is never actually added to the outgoing MapWritable, which causes es-hadoop to throw the exception.
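The fix is either to drop the es.mapping.id setting or to actually put an _id entry into the MapWritable. A minimal sketch of the second option, assuming Text field names (the "average" field name is made up for illustration):

// Give es-hadoop the _id field it was told to look for,
// and store the average under an explicit field name.
final MapWritable output = new MapWritable();
output.put(new Text("_id"), key);
output.put(new Text("average"), average);
context.write(NullWritable.get(), output);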
It would be helpful if MapWritableFieldExtractor logged the field it failed to extract.