
Hadoop 2.3.0 wordcount runs forever

Stack Overflow user
Asked 2015-03-15 23:01:08
2 Answers · Viewed 1.1K times · 0 Followers · Score 6

I am trying to test my Hadoop installation by running the wordcount job. My problem is that the job gets stuck in the ACCEPTED state and seems to run forever. I am using Hadoop 2.3.0 and tried to resolve this by following the answer to this question here, but it did not work for me.

This is what I have:

C:\hadoop-2.3.0>yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar wordcount /data/test.txt /data/output
15/03/15 15:36:07 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/03/15 15:36:09 INFO input.FileInputFormat: Total input paths to process : 1
15/03/15 15:36:10 INFO mapreduce.JobSubmitter: number of splits:1
15/03/15 15:36:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1426430101974_0001
15/03/15 15:36:11 INFO impl.YarnClientImpl: Submitted application application_1426430101974_0001
15/03/15 15:36:11 INFO mapreduce.Job: The url to track the job: http://Agata-PC:8088/proxy/application_1426430101974_0001/
15/03/15 15:36:11 INFO mapreduce.Job: Running job: job_1426430101974_0001

This is my mapred-site.xml:

<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapred.job.tracker</name>
    <value>127.0.0.1:9001</value>
</property>
<property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
</property>
<property>
    <name>mapreduce.history.server.http.address</name>
    <value>127.0.0.1:51111</value>
    <description>Http address of the history server</description>
    <final>false</final>
</property>
<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx768m</value>
</property>
<property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
    <description>The number of virtual cores required for each map task.</description>
</property>
<property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>1</value>
    <description>The number of virtual cores required for each reduce task.</description>
</property>
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for maps.</description>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of maps.</description>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for reduces.</description>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of reduces.</description>
</property>
</configuration>
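As an aside (an observation about the file above, assuming a standard Hadoop 2.x YARN deployment): once mapreduce.framework.name is set to yarn, the JobTracker-era keys shown earlier (mapred.job.tracker, mapreduce.jobtracker.staging.root.dir) are Hadoop 1.x leftovers and are ignored by YARN, so a minimal mapred-site.xml for this setup can be as small as:

```xml
<configuration>
    <!-- The only strictly required entry for running MapReduce on YARN;
         memory/vcore tuning properties are optional overrides. -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```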

This is my yarn-site.xml:

<configuration>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>128</value>
        <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
        <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>2</value>
        <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
        <description>Physical memory, in MB, to be made available to running containers</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
        <description>Number of CPU cores that can be allocated for containers.</description>
    </property>
</configuration>
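One thing worth noting about the yarn-site.xml above (an editor's observation, not part of the original post): it does not register the MapReduce shuffle auxiliary service, which Hadoop 2.x NodeManagers need before MapReduce jobs can run their shuffle phase, and which is a standard item on the checklist when a job hangs. The commonly cited fragment is:

```xml
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```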

Any help is greatly appreciated.


2 Answers

Stack Overflow user

Answered 2015-03-16 02:22:49

Have you tried restarting the Hadoop processes, or the cluster? There may still be some jobs running.

Following the job's tracking URL, or looking at the logs through the Hadoop web UI, may help.

Cheers.

Score 1

Stack Overflow user

Answered 2015-03-16 10:39:22

I have run into a similar problem before; you may have an infinite loop in your mapper or reducer. Check that your reducer handles the iterable correctly.
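To illustrate what this answer means (the stock wordcount example ships with a correct reducer, so this applies mainly to custom jobs): a Hadoop reducer receives each key's values as an Iterable that is only safe to traverse once, and looping on the iterator "until" some value appears is a classic way to hang a task forever. A minimal plain-Java sketch of a correct single-pass reduce, with hypothetical names and no Hadoop dependencies:

```java
import java.util.Arrays;

public class ReduceSketch {
    // Stand-in for WordCount's reduce step: sum the counts seen for one key.
    // In a real Reducer the values arrive as Iterable<IntWritable>; consume
    // the iterable exactly once per key and never re-iterate or wait for a
    // value that may never come.
    static int reduce(String key, Iterable<Integer> counts) {
        int sum = 0;
        for (int c : counts) { // single forward pass
            sum += c;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(reduce("hadoop", Arrays.asList(1, 1, 2))); // prints 4
    }
}
```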

Score 0
The page content was originally provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/29062085
