I can start Hadoop successfully, but the DataNode [slave] can't connect to the NameNode [master].
2016-11-09 16:00:15,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.101:9000
2016-11-09 16:00:21,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.101:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-09 16:00:22,965 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.101:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Details of /etc/hosts:
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.101   master
192.168.1.102   slave1
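A frequent cause of this exact symptom is the NameNode binding port 9000 to the loopback interface rather than to 192.168.1.101, for example when a distro-added line such as 127.0.1.1 master shadows the real entry. One way to check which address the port is actually bound to on the master (this assumes ss is available; netstat -tlnp is the older equivalent):

# On the master: show which address port 9000 is bound to
$ ss -tlnp | grep 9000

If this shows 127.0.0.1:9000, the slave's connection attempts can never succeed; /etc/hosts must resolve master to 192.168.1.101 before the NameNode starts.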
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
and hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///opt/volume/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///opt/volume/datanode</value>
</property>
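If the NameNode does turn out to be listening only on loopback, one option besides fixing /etc/hosts is to make the RPC server bind to all interfaces explicitly. dfs.namenode.rpc-bind-host is a stock HDFS property (Hadoop 2.x and later); a sketch of the extra hdfs-site.xml entry:

<!-- Optional: force the NameNode RPC server to bind to all interfaces -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>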
Answer
1) Check whether the firewall is blocking the port:
$ sudo iptables -L
If it is, flush the rules or open the port explicitly.
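Flushing drops all iptables rules, which is fine for a quick test but removes any other firewall policy on the machine:

# Flush all iptables rules (use with care)
$ sudo iptables -F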
To open only port 9000:
$ sudo iptables -A INPUT -p tcp -m tcp --dport 9000 -j ACCEPT
$ sudo /etc/init.d/iptables save
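To then verify from the slave side that the port is reachable (this assumes nc, i.e. netcat, is installed on slave1; telnet master 9000 is an alternative):

# On slave1: check that the NameNode RPC port now accepts connections
$ nc -zv master 9000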
2) Check the NameNode logs under /var/log/hadoop for any issues.
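Exact file names vary by distribution and by the user running the daemon, but they typically follow a hadoop-<user>-namenode-<hostname>.log pattern, so a glob like the one below usually finds them:

# Show the most recent NameNode log entries (path and pattern may differ on your install)
$ tail -n 50 /var/log/hadoop/hadoop-*-namenode-*.log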