
Hadoop map tasks fail because of ConnectException

Stack Overflow user
Asked on 2014-01-07 13:27:03
1 answer · 4.6K views · 0 followers · 5 votes

I am trying to run the wordcount example on a Hadoop 2.2.0 cluster. Many of the map tasks fail with this exception:

2014-01-07 05:07:12,544 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.net.ConnectException: Call From slave2-machine/127.0.1.1 to slave2-machine:49222 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1351)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:231)
    at com.sun.proxy.$Proxy6.getTask(Unknown Source)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:133)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
    at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
    at org.apache.hadoop.ipc.Client.call(Client.java:1318)
    ... 4 more

The offending port changes every time I run the job, but the map tasks keep failing. I don't know which process is supposed to listen on that port. I also watched the output of netstat -ntlp while the job was running, and no process ever listened on that port.
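The "Connection refused" buried in the trace is simply the kernel's response when no process is listening on the target address and port at all. A minimal, Hadoop-free sketch in Python reproduces the same low-level error (the port here is picked dynamically and released, rather than the one from the log, so that nothing is listening on it):

```python
import socket

# Find a port that nothing listens on: bind to an ephemeral port, then close it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# Connecting to it now fails the same way the Hadoop RPC client did.
s = socket.socket()
try:
    s.connect(("127.0.0.1", port))
    print("connected")           # not expected: nothing should be listening here
except ConnectionRefusedError:
    print("Connection refused")  # the same errno wrapped in the stack trace above
finally:
    s.close()
```

So the question to ask is not "why is the connection failing" but "why is the expected listener not there, or listening on a different address than the one being dialed".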

Update: the master node's /etc/hosts contains:

127.0.0.1   localhost
127.0.1.1   master-machine

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.1.101 slave1 slave1-machine
192.168.1.102 slave2 slave2-machine
192.168.1.1 master

And for slave1 it is:

127.0.0.1   localhost
127.0.1.1   slave1-machine

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.1.1 master
192.168.1.101 slave1
192.168.1.102 slave2 slave2-machine

For slave2 it is the same as slave1's, with the minor changes you would expect. Finally, the yarn/hadoop/etc/hadoop/slaves file on the master contains:

slave1
slave2

1 Answer

Stack Overflow user

Accepted answer

Posted on 2014-01-16 20:28:19

1. Check that the Hadoop nodes can ssh to each other.
2. Check the addresses and ports of the Hadoop daemons in all configuration files.
3. Check /etc/hosts on all nodes.
Here is a useful link for verifying that you started the cluster correctly: Cluster Setup

Ah, so that's what's going on! Your /etc/hosts files are wrong. You should remove the 127.0.1.1 lines. I mean they should look like this:

127.0.0.1       localhost
192.168.1.101    master
192.168.1.103    slave1
192.168.1.104    slave2
192.168.1.105    slave3

Copy and paste this to all the slaves. Also, the slaves should be able to talk to each other.
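The mechanism behind this fix can be seen by emulating how the resolver reads /etc/hosts: the first matching line wins, so the 127.0.1.1 alias shadows the LAN address, and anything that looks up the node's own hostname gets a loopback address that other machines cannot reach. A small sketch (plain Python, not Hadoop code; the hosts content is slave2's from the question, abridged to the relevant lines):

```python
# Emulate first-match /etc/hosts resolution to show why "slave2-machine"
# resolved to the loopback alias 127.0.1.1 instead of 192.168.1.102.
def resolve(hosts_text, name):
    for line in hosts_text.splitlines():
        entry = line.split("#", 1)[0].split()
        if len(entry) >= 2 and name in entry[1:]:
            return entry[0]  # first matching line wins; later lines are ignored
    return None

HOSTS = """\
127.0.0.1   localhost
127.0.1.1   slave2-machine
192.168.1.102 slave2 slave2-machine
"""

print(resolve(HOSTS, "slave2-machine"))  # → 127.0.1.1, matching the error message
print(resolve(HOSTS.replace("127.0.1.1   slave2-machine\n", ""),
              "slave2-machine"))         # → 192.168.1.102 once the line is removed
```

This matches the log line "Call From slave2-machine/127.0.1.1": the hostname resolved to the loopback alias, the RPC connection was dialed on loopback, no listener was there, and the connect was refused. Deleting the 127.0.1.1 lines makes the hostname resolve to the routable address.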

8 votes
Original content provided by Stack Overflow: https://stackoverflow.com/questions/20972852