Implementing a Custom Writable
Hadoop MapReduce uses Writable implementations to move data between user-provided Mappers and Reducers.
Hadoop ships with many built-in Writables (IntWritable, LongWritable, Text, and so on), but sometimes we need to pass custom objects, and those objects must implement Hadoop's Writable interface.
In this post we build a custom class, IntPair.
The Writable interface requires two methods:
public interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}
The code for IntPair is given below:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

public class IntPair implements Writable {

    private IntWritable first;
    private IntWritable second;

    // Hadoop needs a no-arg constructor so it can instantiate the class
    // reflectively and then populate it via readFields().
    public IntPair() {
        set(new IntWritable(), new IntWritable());
    }

    public IntPair(int first, int second) {
        set(new IntWritable(first), new IntWritable(second));
    }

    public void set(IntWritable first, IntWritable second) {
        this.first = first;
        this.second = second;
    }

    public IntWritable getFirst() {
        return first;
    }

    public IntWritable getSecond() {
        return second;
    }

    public int getFirstInt() {
        return first.get();
    }

    public int getSecondInt() {
        return second.get();
    }

    // Serialize both fields, first then second.
    @Override
    public void write(DataOutput out) throws IOException {
        first.write(out);
        second.write(out);
    }

    // Deserialize in exactly the order the fields were written.
    @Override
    public void readFields(DataInput in) throws IOException {
        first.readFields(in);
        second.readFields(in);
    }
}
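Before wiring IntPair into a job, it is worth sanity-checking that write() and readFields() round-trip correctly. Below is a minimal sketch using plain Java streams; the RoundTripTest class name is just for illustration and is not part of the Hadoop API.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTripTest {
    public static void main(String[] args) throws IOException {
        IntPair original = new IntPair(3, 7);

        // Serialize with write().
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // Deserialize with readFields() into a fresh instance.
        IntPair copy = new IntPair();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy.getFirstInt() + ", " + copy.getSecondInt()); // prints 3, 7
    }
}

If the printed values match the originals, the serialization order is consistent.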
Now we can use this IntPair class as a value type in Hadoop MapReduce, as sketched below.
If we want to use IntPair as a key, it must implement WritableComparable instead, which we shall cover in a different post.
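As an illustration of the value-type usage, here is a minimal sketch of a Mapper that parses lines such as "key 3 7" and emits IntPair values. The PairMapper name, the whitespace-splitting logic, and the text input format are assumptions made for this example, not part of the original post.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: reads lines like "key 3 7" and emits (key, IntPair(3, 7)).
public class PairMapper extends Mapper<LongWritable, Text, Text, IntPair> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\\s+");
        if (fields.length == 3) {
            context.write(new Text(fields[0]),
                    new IntPair(Integer.parseInt(fields[1]),
                                Integer.parseInt(fields[2])));
        }
    }
}

The corresponding job setup would declare the value class with job.setMapOutputValueClass(IntPair.class).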