Java – Invalid URI for NameNode address

I’m trying to set up a Cloudera Hadoop cluster with one master node running the namenode, secondarynamenode, and jobtracker, and two other nodes running the datanode and tasktracker. The Cloudera version is 4.6 and the operating system is Ubuntu Precise x64. The cluster is built on AWS instances. Passwordless SSH is set up, and the Java installation is Oracle JDK 7.

Whenever I execute sudo service hadoop-hdfs-namenode start I get:

2014-05-14 05:08:38,023 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:329)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:317)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:370)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:422)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:442)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
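
As the message suggests, the NameNode is falling back to the default file:/// filesystem instead of the hdfs:// URI I configured. A quick way to see which fs.defaultFS value the active configuration actually resolves (a sketch, assuming the hdfs client from the CDH packages is on the PATH):

# Print the fs.defaultFS value seen by the active Hadoop configuration.
# If this prints file:///, the daemon is not reading the edited core-site.xml.
hdfs getconf -confKey fs.defaultFS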

My core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://<master-ip>:8020</value>
   </property>
</configuration>

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
   <property>
      <name>mapred.job.tracker</name>
      <value>hdfs://<master-ip>:8021</value>
   </property>
</configuration>

hdfs-site.xml:

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
   <property>
      <name>dfs.replication</name>
      <value>2</value>
   </property>
   <property>
      <name>dfs.permissions</name>
      <value>false</value>
   </property>
</configuration>

I’ve tried using the public IP, private IP, public DNS, and FQDN, but the result is the same.
The directory /etc/hadoop/conf.empty looks like this:

-rw-r--r-- 1 root root   2998 Feb 26 10:21 capacity-scheduler.xml
-rw-r--r-- 1 root hadoop 1335 Feb 26 10:21 configuration.xsl
-rw-r--r-- 1 root root    233 Feb 26 10:21 container-executor.cfg
-rwxr-xr-x 1 root root    287 May 14 05:09 core-site.xml
-rwxr-xr-x 1 root root   2445 May 14 05:09 hadoop-env.sh
-rw-r--r-- 1 root hadoop 1774 Feb 26 10:21 hadoop-metrics2.properties
-rw-r--r-- 1 root hadoop 2490 Feb 26 10:21 hadoop-metrics.properties
-rw-r--r-- 1 root hadoop 9196 Feb 26 10:21 hadoop-policy.xml
-rwxr-xr-x 1 root root    332 May 14 05:09 hdfs-site.xml
-rw-r--r-- 1 root hadoop 8735 Feb 26 10:21 log4j.properties
-rw-r--r-- 1 root root   4113 Feb 26 10:21 mapred-queues.xml.template
-rwxr-xr-x 1 root root    290 May 14 05:09 mapred-site.xml
-rw-r--r-- 1 root root    178 Feb 26 10:21 mapred-site.xml.template
-rwxr-xr-x 1 root root     12 May 14 05:09 masters
-rwxr-xr-x 1 root root     29 May 14 05:09 slaves
-rw-r--r-- 1 root hadoop 2316 Feb 26 10:21 ssl-client.xml.example
-rw-r--r-- 1 root hadoop 2251 Feb 26 10:21 ssl-server.xml.example
-rw-r--r-- 1 root root   2513 Feb 26 10:21 yarn-env.sh
-rw-r--r-- 1 root root   2262 Feb 26 10:21 yarn-site.xml

and the slaves file lists the IP addresses of the two slaves:

<slave1-ip>
<slave2-ip>

Executing

update-alternatives --get-selections | grep hadoop

returns:

hadoop-conf                    auto     /etc/hadoop/conf.empty
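
So the active hadoop-conf alternative points at /etc/hadoop/conf.empty, which is the directory holding the edited files above. For reference, this is roughly how the alternative would be switched to a dedicated configuration directory on CDH (a sketch; the directory name conf.my_cluster is only an example, not part of my setup):

# Copy the shipped configuration and register it as the preferred alternative
sudo cp -r /etc/hadoop/conf.empty /etc/hadoop/conf.my_cluster
sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster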

I did a lot of searching but didn’t get any information that could help me solve the problem. Can someone provide any clues?

Solution

I had the same issue and solved it by formatting the NameNode. Here is the command:

hdfs namenode -format
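
On a packaged CDH install the NameNode daemon runs as the hdfs user, so the format is normally run as that user (a sketch, assuming the standard CDH packaging):

# Format the HDFS metadata directories as the hdfs service user
sudo -u hdfs hdfs namenode -format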

The core-site.xml entry is:

<configuration>
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
   </property>
</configuration>
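
In a multi-node setup like the one in the question, the value would stay hdfs://<master-ip>:8020 rather than localhost; the key point is that fs.defaultFS must be an hdfs:// URI with an authority. After the format, the service can be started again with the same command used in the question:

# Start the NameNode daemon again once the config is in place
sudo service hadoop-hdfs-namenode start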

That solved the problem for me.
