I have a two-node cluster setup: one node is "master-slave" and the other is "slave".

The NameNode service has started.

The "slave" node fails to connect to the master, with this error:
```
slave:/usr/lib/hadoop-0.20/conf# tailf /usr/lib/hadoop-0.20/logs/hadoop-hadoop-datanode-slave.log
2014-03-02 10:43:07,816 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master-slave/192.168.1.118:54310. Already tried 4 time(s).
2014-03-02 10:43:08,817 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master-slave/192.168.1.118:54310. Already tried 5 time(s).
2014-03-02 10:43:09,820 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master-slave/192.168.1.118:54310. Already tried 6 time(s).
2014-03-02 10:43:10,821 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master-slave/192.168.1.118:54310. Already tried 7 time(s).
```
a) On the master:

```
master-slave:/usr/lib/hadoop/conf# lsof -i:54310
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    3080 hdfs   62u  IPv4  22507      0t0  TCP master-slave:54310 (LISTEN)
```
b) core-site.xml on the slave:
```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master-slave:54310</value>
  <description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.</description>
  <final>true</final>
</property>
```
c) /etc/hosts on the master (identical on the slave):
```
master-slave:/usr/lib/hadoop/conf# cat /etc/hosts
127.0.0.1     localhost
127.0.1.1     master-slave
192.168.1.118 master-slave
192.168.1.120 slave
```
d) I have disabled IPv6.
e) From the slave I cannot telnet to port 54310 on the master, but telnet to port 22 works.
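The telnet check above can also be scripted. A minimal sketch of a generic TCP reachability test (the `port_open` helper is my own name, not part of Hadoop) that could be run from the slave against `master-slave:54310`:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unresolvable hosts.
        return False

# Example usage (hostname/port from the question):
# port_open("master-slave", 54310)  -> should be True once the issue is fixed
# port_open("master-slave", 22)     -> True (SSH works in the question)
```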
This seems strange. Please help me fix it; I have made every change I know of, with no luck.
Posted on 2014-03-03 01:57:49
For future reference: the problem here was caused by the hosts file. Edit /etc/hosts and remove this entry:

```
127.0.1.1 master-slave
```

After removing it, everything worked fine.
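The likely mechanism (my reading, not stated in the answer): /etc/hosts is matched top-down, so `master-slave` resolved to 127.0.1.1 first and the NameNode bound its 54310 listener to the loopback interface, which is unreachable from the slave. A corrected /etc/hosts, using the IPs from the question, might look like:

```
127.0.0.1     localhost
192.168.1.118 master-slave
192.168.1.120 slave
```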
https://stackoverflow.com/questions/22124218