A loadFileSystems error occurred when calling a program that uses libhdfs
Here is my libhdfs test code.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>   /* O_WRONLY, O_CREAT */
#include "hdfs.h"

int main(int argc, char **argv)
{
    hdfsFS fs = hdfsConnect("hdfs://labossrv14", 9000);
    const char *writePath = "/libhdfs_test.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY | O_CREAT, 0, 0, 0);
    if (!writeFile)
    {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    const char *buffer = "Hello, libhdfs!";
    /* Write the string including its trailing '\0'. */
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void *)buffer, strlen(buffer) + 1);
    if (hdfsFlush(fs, writeFile))
    {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}
I managed to compile this code, but the program fails at run time. The error messages are as follows.
loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=labossrv14, port=9000, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/libhdfs_test.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /libhdfs_test.txt for writing!
I experimented with this by following the official documentation, and I found that the problem could be an incorrect CLASSPATH.
Below is my CLASSPATH, which is a combination of the classpath generated by "hadoop classpath --glob" and the lib paths of the JDK and JRE.
export CLASSPATH=/home/junzhao/hadoop/hadoop-2.5.2/etc/hadoop:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/usr/lib/jvm/java-8-oracle/lib:/usr/lib/jvm/java-8-oracle/jre/lib:$CLASSPATH
Does anyone have some good solutions? Thanks!
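(For reference, a typical compile/link line for such a libhdfs program looks like the following; the include and library paths here are assumptions based on the Hadoop 2.5.2 / Oracle JDK 8 layout above and must be adjusted to your own installation.)

```shell
# Assumed paths; adjust HADOOP_HOME and the JVM directory to your setup.
gcc libhdfs_test.c \
    -I"$HADOOP_HOME/include" \
    -L"$HADOOP_HOME/lib/native" -lhdfs \
    -L/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server -ljvm \
    -o libhdfs_test
```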
Solution
I went back through the tutorial and some of the questions I had asked earlier. It turned out that the problem was that JNI does not expand the wildcard characters in the classpath. So I put every jar into the CLASSPATH explicitly, one by one, and the problem was solved.
Since the command "hadoop classpath --glob" also generates wildcards, this explains why the official documentation says:
It is not valid to use wildcard syntax for specifying multiple jars. It may be useful to run hadoop classpath --glob or hadoop classpath --jar to generate the correct classpath for your deployment.
Yesterday I misunderstood this paragraph.
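A minimal sketch of expanding the wildcards into an explicit jar-by-jar CLASSPATH, which is what JNI needs. The $HADOOP_HOME layout and jar names below are purely illustrative (the mock setup exists only so the snippet is self-contained; drop it for a real install):

```shell
# Illustrative only: fake a Hadoop-like jar layout so the loop has input.
HADOOP_HOME="${HADOOP_HOME:-/tmp/hadoop-demo}"
mkdir -p "$HADOOP_HOME/share/hadoop/common" "$HADOOP_HOME/share/hadoop/hdfs/lib"
touch "$HADOOP_HOME/share/hadoop/common/hadoop-common-2.5.2.jar" \
      "$HADOOP_HOME/share/hadoop/hdfs/lib/guava-11.0.2.jar"

# JNI will not expand '*' itself, so list every jar explicitly.
CLASSPATH="$HADOOP_HOME/etc/hadoop"
for jar in $(find "$HADOOP_HOME/share/hadoop" -name '*.jar'); do
    CLASSPATH="$CLASSPATH:$jar"
done
export CLASSPATH
echo "$CLASSPATH"
```

The same loop works over the real share/hadoop tree; the point is that every entry in the final CLASSPATH is a concrete jar path rather than a wildcard.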
See also "Hadoop C++ HDFS test running Exception" and "Can JNI be made to honour wildcard expansion in the classpath?"