
Hadoop exception: java.net.ConnectException

Stack Overflow user
Asked on 2015-04-27 19:13:49
3 answers · 4K views · 0 following · 0 votes

I have installed Hadoop 2.6 on four machines (distributed mode). All daemons are running fine, but when I run the standard example:

hadoop jar hadoop-mapreduce-examples-2.6.0.jar teragen  10  /input

it gives me the following error:

hadoop jar /root/exp_testing/hadoop_new/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar teragen  10  /input
15/04/28 05:45:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/28 05:45:51 INFO client.RMProxy: Connecting to ResourceManager at enode1/192.168.1.231:8050
15/04/28 05:45:53 INFO terasort.TeraSort: Generating 10 using 2
15/04/28 05:45:53 INFO mapreduce.JobSubmitter: number of splits:2
15/04/28 05:45:54 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1430180067597_0001
15/04/28 05:45:54 INFO impl.YarnClientImpl: Submitted application application_1430180067597_0001
15/04/28 05:45:54 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1430180067597_0001/
15/04/28 05:45:54 INFO mapreduce.Job: Running job: job_1430180067597_0001
15/04/28 05:46:15 INFO mapreduce.Job: Job job_1430180067597_0001 running in uber mode : false
15/04/28 05:46:15 INFO mapreduce.Job:  map 0% reduce 0%
15/04/28 05:46:15 INFO mapreduce.Job: Job job_1430180067597_0001 failed with state FAILED due to: Application application_1430180067597_0001 failed 2 times due to Error launching appattempt_1430180067597_0001_000002. Got exception: java.net.ConnectException: Call From ubuntu/127.0.1.1 to ubuntu:60839 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy79.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 9 more
. Failing the application.
15/04/28 05:46:15 INFO mapreduce.Job: Counters: 0

I have two sets of machines (each containing 4 nodes). The same setup is working for the other set, but I don't know why I'm facing this problem with this one.

/etc/hosts

127.0.0.1       localhost
#127.0.1.1      ubuntu
127.0.0.1       ubuntu
#192.168.1.231  ubuntu



192.168.1.231    enode1
192.168.1.232    enode2
192.168.1.233    enode3
192.168.1.234    enode4


# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://enode1:9000/</value>
</property>

</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
   <name>dfs.replication</name>
   <value>2</value>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/home/exp_testing/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/home/exp_testing/hdfs/datanode</value>
 </property>
</configuration>

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
            <name>yarn.app.mapreduce.am.resource.mb</name>
            <value>1536</value>
        </property>
        <property>
            <name>yarn.app.mapreduce.am.command-opts</name>
            <value>-Xmx1024m</value>
    </property>
    <property>
       <name>mapreduce.framework.name</name>    
       <value>yarn</value>
    </property>
    <property>
            <name>mapreduce.map.cpu.vcores</name>
            <value>1</value>
            <description>The number of virtual cores required for each map task.</description>
    </property>
    <property>
            <name>mapreduce.reduce.cpu.vcores</name>
            <value>1</value>
            <description>The number of virtual cores required for each reduce task.</description>
    </property>
    <property>
            <name>mapreduce.map.memory.mb</name>
            <value>1024</value>
            <description>Larger resource limit for maps.</description>
    </property>
    <property>
            <name>mapreduce.map.java.opts</name>
            <value>-Xmx400m</value>
            <description>Heap-size for child jvms of maps.</description>
        </property>
        <property>
                <name>mapreduce.reduce.memory.mb</name>
                <value>1024</value>
                <description>Larger resource limit for reduces.</description>
        </property>
        <property>
            <name>mapreduce.reduce.java.opts</name>
            <value>-Xmx400m</value>
            <description>Heap-size for child jvms of reduces.</description>
        </property>
        <property>
            <name>mapreduce.jobtracker.address</name>
            <value>enode1:54311</value>
        </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Site specific YARN configuration properties -->
<configuration>
    <property>
        <description>Whether to enable log aggregation</description>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>10</value>
        <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>6144</value>
        <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
        <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>32</value>
        <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>6144</value>
        <description>Physical memory, in MB, to be made available to running containers</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>8</value>
        <description>Number of CPU cores that can be allocated for containers.</description>
    </property>
    <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
     </property>
     <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
     </property>
     <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>enode1:8025</value>
        <description>The hostname of the RM.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>enode1:8030</value>
        <description>The hostname of the RM.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>enode1:8050</value>
        <description>The hostname of the RM.</description>
    </property>

</configuration>

Result of hadoop fs -ls /

root@ubuntu:~/exp_testing/mysrc# hadoop fs -ls /
15/04/29 00:43:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - root supergroup          0 2015-04-29 00:43 /input
drwx------   - root supergroup          0 2015-04-29 00:43 /tmp

@sandeep007734's solution is working for my new cluster, and I believe in his solution; but for the old cluster I commented out this line in /etc/hosts and it works fine:

#127.0.1.1 ubuntu

I don't know why this happens.


3 Answers

Stack Overflow user

Accepted answer

Posted on 2015-04-28 18:21:54

The problem is the hostname configuration. If you are using custom hostnames that are defined only in the /etc/hosts file (not in DNS), Hadoop can sometimes behave in strange ways.

You are using the names enode1, enode2, etc. for your nodes.

But the error you posted shows:

15/04/28 05:45:54 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1430180067597_0001/

Here it says that the URL to track the job is at http://ubuntu, which means Hadoop is picking up the system's hostname for its operations.

Now, one obvious solution is to add an entry to the /etc/hosts file on every node, including the master. For example, on enode1:

192.168.1.231 ubuntu 

This will work fine when you format the namenode and start the cluster.

But if you try to run a job, you will run into trouble, because the slaves will try to connect to the ResourceManager using the address

ubuntu/192.168.1.231

This notation means: if the hostname ubuntu cannot be resolved, use the IP. But each slave can resolve ubuntu, to the IP mapped in its own /etc/hosts file.

For example, when the slave running on machine 192.168.1.232 tries to connect to the ResourceManager, it uses ubuntu/192.168.1.231; but the hostname ubuntu resolves to 192.168.1.232, because that is what you defined in that machine's /etc/hosts file.
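As a quick diagnostic (not part of the original answer; getent and hostname -I are standard Linux tools), you can check on each node how the shared name resolves versus the address the node actually uses:

```shell
# How does the name 'ubuntu' resolve on this node?
# On a broken setup, each node maps it to a different
# (often loopback, or its own) address.
getent hosts ubuntu || echo "ubuntu does not resolve here"

# The LAN address(es) this node actually uses; other nodes
# must be able to reach one of these.
hostname -I
```

If the two outputs disagree on every slave, container launches from other nodes will hit exactly this Connection refused.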

During job execution, you should be able to see this error in the logs:

org.apache.hadoop.ipc.Client: Retrying connect to server

It keeps trying to connect to the ResourceManager for a long time, which is why your job takes so long before failing: the map tasks scheduled on the slave nodes try for a long time to reach the ResourceManager and eventually fail. Only the map tasks scheduled on the master node succeed (since you are also using the master as a slave), because ubuntu resolves correctly only on the master.
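You can confirm this on a failing slave by searching its Hadoop logs for the retry messages (the log directory below is an assumption based on the install path in the question; adjust to your setup):

```shell
# Show the last few RPC retry messages from the Hadoop logs
# on a slave node; repeated lines here mean the node cannot
# reach the ResourceManager address it resolved.
grep -R "Retrying connect to server" /root/exp_testing/hadoop_new/logs/ | tail -n 5
```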

The way to solve this problem:

  1. Stop the Hadoop cluster.
  2. Edit the file /etc/hostname on every machine; for example, on machine enode1 change it from ubuntu to enode1 (and likewise to enode2, enode3, enode4 on the corresponding machines).
  3. Remove the entry for ubuntu from the /etc/hosts file.
  4. Reboot.
  5. Make sure the hostname really changed, using the hostname command.
  6. Format the namenode.
  7. Start the cluster and run the job again. It should be fine.
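The per-node file edits above can be sketched as follows. This sketch works on copies of the system files so it is safe to try; on a real node you would edit /etc/hostname and /etc/hosts directly as root, then reboot:

```shell
#!/bin/sh
# Sketch of the hostname fix, demonstrated on copies of the
# system files. enode1 is the name from the question; use the
# matching name on each machine.
cp /etc/hosts hosts.copy
printf 'enode1\n' > hostname.copy     # new permanent hostname
sed -i '/ubuntu/d' hosts.copy         # drop every 'ubuntu' entry
cat hostname.copy                     # prints: enode1
```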
Votes: 0

Stack Overflow user

Posted on 2015-04-28 16:27:47

Try removing these lines from /etc/hosts, and disable IPv6 if you are not using it:

127.0.0.1       localhost
127.0.0.1       ubuntu
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

One problem with IPv6 is that using 0.0.0.0 for various network-related Hadoop configuration options results in Hadoop binding to IPv6 addresses.

So if you are not using IPv6, it is better to disable it, as it can cause problems when running Hadoop.

To disable IPv6, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file:

# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
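To apply these settings without a reboot and verify the result (standard sysctl usage; not spelled out in the original answer):

```shell
# After editing /etc/sysctl.conf, reload it (as root):
#   sysctl -p
# Then verify: a value of 1 means IPv6 is disabled on all
# interfaces, 0 means it is still enabled.
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```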

Hope this helps.

Votes: 0

Stack Overflow user

Posted on 2016-09-17 09:25:35

I ran into the same problem and finally, luckily, solved it. The problem was with the hostnames!

Just su root, then run:

hostname master on the master node,

hostname slave01 / slave02 on the slave nodes,

and restart the cluster.

It will be OK.

...........................................................

Here is what it looks like on my machines.

(1) My problem was:

miaofu@miaofu-Virtual-Machine:~/hadoop-2.6.4/etc/hadoop$ hadoop jar ../../share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out2
16/09/17 15:41:14 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.202.104:8032
16/09/17 15:41:17 INFO input.FileInputFormat: Total input paths to process : 9
16/09/17 15:41:17 INFO mapreduce.JobSubmitter: number of splits:9
16/09/17 15:41:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474096034614_0002
16/09/17 15:41:18 INFO impl.YarnClientImpl: Submitted application application_1474096034614_0002
16/09/17 15:41:18 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474096034614_0002/
16/09/17 15:41:18 INFO mapreduce.Job: Running job: job_1474096034614_0002
16/09/17 15:41:26 INFO mapreduce.Job: Job job_1474096034614_0002 running in uber mode : false
16/09/17 15:41:26 INFO mapreduce.Job:  map 0% reduce 0%
16/09/17 15:41:39 INFO mapreduce.Job:  map 11% reduce 0%
16/09/17 15:41:40 INFO mapreduce.Job:  map 22% reduce 0%
16/09/17 15:41:41 INFO mapreduce.Job:  map 67% reduce 0%
16/09/17 15:41:54 INFO mapreduce.Job:  map 67% reduce 22%
16/09/17 15:44:29 INFO mapreduce.Job: Task Id : attempt_1474096034614_0002_m_000006_0, Status : FAILED
Container launch failed for container_1474096034614_0002_01_000008 : java.net.ConnectException: Call From miaofu-Virtual-Machine/127.0.0.1 to localhost:57019 failed on connection exception: java.net.ConnectException: 拒绝连接; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1473)
    at org.apache.hadoop.ipc.Client.call(Client.java:1400)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy36.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy37.startContainers(Unknown Source)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:151)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

(2) This is my configuration, /etc/hosts:

127.0.0.1       localhost
127.0.0.1 miaofu-Virtual-Machine
192.168.202.104 master
192.168.202.31 slave01
192.168.202.105 slave02
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

(3) Set the hostname on the master node:

root@miaofu-Virtual-Machine:/home/miaofu# vi /etc/hostname 
root@miaofu-Virtual-Machine:/home/miaofu# hostname
miaofu-Virtual-Machine
root@miaofu-Virtual-Machine:/home/miaofu# hostname master
root@miaofu-Virtual-Machine:/home/miaofu# hostname
master

On the slave:

miaofu@miaofu-Virtual-Machine:~$ su root
密码: 
^Z
[3]+  已停止               su root
miaofu@miaofu-Virtual-Machine:~$ sudo passwd root
[sudo] password for miaofu: 
输入新的 UNIX 密码: 
重新输入新的 UNIX 密码: 
passwd:已成功更新密码
miaofu@miaofu-Virtual-Machine:~$ hostname slave02
hostname: you must be root to change the host name
miaofu@miaofu-Virtual-Machine:~$ su root
密码: 
root@miaofu-Virtual-Machine:/home/miaofu# 
root@miaofu-Virtual-Machine:/home/miaofu# 
root@miaofu-Virtual-Machine:/home/miaofu# 
root@miaofu-Virtual-Machine:/home/miaofu# hostname slave02
root@miaofu-Virtual-Machine:/home/miaofu# hostname 
slave02

(4) Restart the cluster:

stop-yarn.sh
stop-dfs.sh
cd
rm -r hadoop-2.6.4/tmp/*

hadoop namenode -format
start-dfs.sh
start-yarn.sh 

(5) Just run wordcount:

miaofu@miaofu-Virtual-Machine:~$ hadoop fs -mkdir /in
miaofu@miaofu-Virtual-Machine:~$ vi retry.sh 
miaofu@miaofu-Virtual-Machine:~$ hadoop fs -put etc/hadoop/*.xml /in
put: `etc/hadoop/*.xml': No such file or directory
miaofu@miaofu-Virtual-Machine:~$ hadoop fs -put hadoop-2.6.4/etc/hadoop/*.xml /in
miaofu@miaofu-Virtual-Machine:~$ jps
61591 Jps
60601 ResourceManager
60297 SecondaryNameNode
60732 NodeManager
60092 DataNode
59927 NameNode
miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/
bin/         etc/         include/     lib/         LICENSE.txt  NOTICE.txt   sbin/        tmp/         
conf.sh      home/        input/       libexec/     logs/        README.txt   share/       
miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/share/
doc/    hadoop/ 
miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out
^Z
[1]+  已停止               hadoop jar hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out
miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out3
16/09/17 16:46:24 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.202.104:8032
16/09/17 16:46:25 INFO input.FileInputFormat: Total input paths to process : 9
16/09/17 16:46:25 INFO mapreduce.JobSubmitter: number of splits:9
16/09/17 16:46:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474101888060_0001
16/09/17 16:46:26 INFO impl.YarnClientImpl: Submitted application application_1474101888060_0001
16/09/17 16:46:26 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474101888060_0001/
16/09/17 16:46:26 INFO mapreduce.Job: Running job: job_1474101888060_0001
16/09/17 16:46:35 INFO mapreduce.Job: Job job_1474101888060_0001 running in uber mode : false
16/09/17 16:46:35 INFO mapreduce.Job:  map 0% reduce 0%
16/09/17 16:46:44 INFO mapreduce.Job:  map 22% reduce 0%
16/09/17 16:46:45 INFO mapreduce.Job:  map 33% reduce 0%
16/09/17 16:46:48 INFO mapreduce.Job:  map 67% reduce 0%
16/09/17 16:46:49 INFO mapreduce.Job:  map 100% reduce 0%
16/09/17 16:46:51 INFO mapreduce.Job:  map 100% reduce 100%
16/09/17 16:46:52 INFO mapreduce.Job: Job job_1474101888060_0001 completed successfully
16/09/17 16:46:52 INFO mapreduce.Job: Counters: 50
    File System Counters
        FILE: Number of bytes read=21875
        FILE: Number of bytes written=1110853
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=28532
        HDFS: Number of bytes written=10579
        HDFS: Number of read operations=30
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Killed map tasks=1
        Launched map tasks=9
        Launched reduce tasks=1
        Data-local map tasks=9
        Total time spent by all maps in occupied slots (ms)=84614
        Total time spent by all reduces in occupied slots (ms)=4042
        Total time spent by all map tasks (ms)=84614
        Total time spent by all reduce tasks (ms)=4042
        Total vcore-milliseconds taken by all map tasks=84614
        Total vcore-milliseconds taken by all reduce tasks=4042
        Total megabyte-milliseconds taken by all map tasks=86644736
        Total megabyte-milliseconds taken by all reduce tasks=4139008
    Map-Reduce Framework
        Map input records=796
        Map output records=2887
        Map output bytes=36776
        Map output materialized bytes=21923
        Input split bytes=915
        Combine input records=2887
        Combine output records=1265
        Reduce input groups=606
        Reduce shuffle bytes=21923
        Reduce input records=1265
        Reduce output records=606
        Spilled Records=2530
        Shuffled Maps =9
        Failed Shuffles=0
        Merged Map outputs=9
        GC time elapsed (ms)=590
        CPU time spent (ms)=6470
        Physical memory (bytes) snapshot=2690990080
        Virtual memory (bytes) snapshot=8380964864
        Total committed heap usage (bytes)=1966604288
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=27617
    File Output Format Counters 
        Bytes Written=10579

If you have any questions, contact me at 13347217145@163.com.

Votes: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/29904083