Java – What does EOF exception mean in hadoop namenode connection from hbase/filesystem?

What does an EOF exception mean in a Hadoop namenode connection from HBase/filesystem? Here is a solution to the problem.

This is both a general question about Java EOF exceptions and a question about Hadoop's EOF exception, which is related to jar interoperability. Comments and answers on either topic are welcome.

Background

I've noticed several threads discussing a mysterious exception that ends up being thrown by the readInt method. The exception has a generic meaning independent of Hadoop, but in this context it is ultimately caused by mismatched (non-interoperable) Hadoop jars.
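For the generic Java meaning, the behavior is easy to reproduce on its own: DataInputStream.readInt throws java.io.EOFException whenever the underlying stream ends before the four bytes of an int can be read. A minimal sketch, with no Hadoop involved:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;

    public class ReadIntEof {
        public static void main(String[] args) throws Exception {
            // Only two bytes are available, but readInt() needs four, so the
            // stream hits end-of-file mid-read and throws java.io.EOFException.
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(new byte[] {0x00, 0x01}));
            try {
                in.readInt();
            } catch (EOFException e) {
                System.out.println("EOF before a full int could be read: " + e);
            }
        }
    }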

In my case, I got it when I tried to create a new FileSystem object in Hadoop from Java.
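A minimal sketch of the kind of call that can trigger this; the class name mirrors the stack trace below, and the host, port, and URI are placeholders rather than my actual cluster settings:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HadoopRemote {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder namenode address; the EOFException surfaces inside this call
            // when the client cannot complete the IPC handshake with the namenode.
            FileSystem fs = FileSystem.get(URI.create("hdfs://10.0.1.37:8020"), conf);
            System.out.println("Connected to " + fs.getUri());
            fs.close();
        }
    }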

Question

My question is: what exactly happened, and why does reading an integer throw an EOF exception? What does this EOF exception refer to, and why is it thrown when the two jars cannot interoperate?

Secondly, I would also like to know how to fix this error so that I can connect remotely and read/write to the Hadoop filesystem using the HDFS protocol and the Java API…

java.io.IOException: Call to /10.0.1.37:50070 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
    at sb.HadoopRemote.main(HadoopRemote.java:35)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:819)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:720)

Solution

Regarding Hadoop: I fixed the error! You need to make sure that core-site.xml serves on 0.0.0.0 rather than 127.0.0.1 (localhost).
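A core-site.xml sketch of that change, assuming the fs.default.name property used by Hadoop releases of this era; the port is a placeholder for your namenode's IPC port:

    <configuration>
      <property>
        <name>fs.default.name</name>
        <!-- Bind to all interfaces so remote clients can reach the namenode IPC port -->
        <value>hdfs://0.0.0.0:8020</value>
      </property>
    </configuration>

The namenode has to be restarted after the change so it rebinds to the new address.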

If you get the EOF exception, it means the port is not externally reachable at that IP, so there is no data for the Hadoop client/server IPC to read.
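To address the second part of the question, once the namenode is reachable a remote read/write through the same Java API could look roughly like this; the address and path are placeholders for your own cluster:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder namenode address; use the host and IPC port of your cluster.
            FileSystem fs = FileSystem.get(URI.create("hdfs://10.0.1.37:8020"), conf);

            Path path = new Path("/tmp/hello.txt");
            // Write a small file, then read it back.
            FSDataOutputStream out = fs.create(path, true);
            out.writeUTF("hello from a remote client");
            out.close();

            System.out.println(fs.open(path).readUTF());
            fs.close();
        }
    }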
