
Hadoop Map Job Stuck Due to a Memory Error

Stack Overflow user
Asked on 2015-06-23 14:18:37
1 answer · 792 views · 0 following · 0 votes

I am running a Hive script that performs some operations on a table with 5,452,689 rows and 7.7 GB of data. However, my MapReduce job gets stuck at around 70% and fails with a "no space left" error.

The error is as follows:

Hadoop job information for Stage-4: number of mappers: 27; number of reducers: 1
2015-06-23 01:47:43,748 Stage-4 map = 0%,  reduce = 0%
2015-06-23 01:48:04,550 Stage-4 map = 1%,  reduce = 0%, Cumulative CPU 258.06 sec
2015-06-23 01:48:06,661 Stage-4 map = 2%,  reduce = 0%, Cumulative CPU 331.98 sec
2015-06-23 01:48:07,796 Stage-4 map = 3%,  reduce = 0%, Cumulative CPU 370.35 sec
2015-06-23 01:48:09,931 Stage-4 map = 4%,  reduce = 0%, Cumulative CPU 406.65 sec
2015-06-23 01:48:28,778 Stage-4 map = 7%,  reduce = 0%, Cumulative CPU 973.33 sec
2015-06-23 01:48:30,987 Stage-4 map = 8%,  reduce = 0%, Cumulative CPU 1034.17 sec
2015-06-23 01:48:34,251 Stage-4 map = 11%,  reduce = 0%, Cumulative CPU 1121.22 sec
2015-06-23 01:48:35,419 Stage-4 map = 13%,  reduce = 0%, Cumulative CPU 1173.4 sec
2015-06-23 01:48:36,458 Stage-4 map = 18%,  reduce = 0%, Cumulative CPU 1191.29 sec
2015-06-23 01:48:37,499 Stage-4 map = 22%,  reduce = 0%, Cumulative CPU 1215.44 sec
2015-06-23 01:48:38,607 Stage-4 map = 29%,  reduce = 0%, Cumulative CPU 1267.07 sec
2015-06-23 01:48:39,671 Stage-4 map = 32%,  reduce = 0%, Cumulative CPU 1289.57 sec
2015-06-23 01:48:40,883 Stage-4 map = 34%,  reduce = 0%, Cumulative CPU 1309.96 sec
2015-06-23 01:48:41,922 Stage-4 map = 36%,  reduce = 0%, Cumulative CPU 1366.31 sec
2015-06-23 01:48:48,693 Stage-4 map = 39%,  reduce = 0%, Cumulative CPU 1554.9 sec
2015-06-23 01:48:54,121 Stage-4 map = 40%,  reduce = 0%, Cumulative CPU 1709.04 sec
2015-06-23 01:49:00,973 Stage-4 map = 43%,  reduce = 0%, Cumulative CPU 1895.86 sec
2015-06-23 01:49:03,099 Stage-4 map = 46%,  reduce = 0%, Cumulative CPU 1976.89 sec
2015-06-23 01:49:05,180 Stage-4 map = 49%,  reduce = 0%, Cumulative CPU 2003.08 sec
2015-06-23 01:49:06,225 Stage-4 map = 58%,  reduce = 0%, Cumulative CPU 2062.33 sec
2015-06-23 01:49:07,353 Stage-4 map = 60%,  reduce = 0%, Cumulative CPU 2067.9 sec
2015-06-23 01:49:08,388 Stage-4 map = 66%,  reduce = 0%, Cumulative CPU 2087.55 sec
2015-06-23 01:49:09,551 Stage-4 map = 74%,  reduce = 2%, Cumulative CPU 2112.96 sec
2015-06-23 01:49:10,607 Stage-4 map = 75%,  reduce = 2%, Cumulative CPU 2118.14 sec
2015-06-23 01:49:11,669 Stage-4 map = 19%,  reduce = 0%, Cumulative CPU 433.75 sec
2015-06-23 01:49:12,699 Stage-4 map = 16%,  reduce = 0%, Cumulative CPU 350.93 sec
2015-06-23 01:49:14,760 Stage-4 map = 15%,  reduce = 0%, Cumulative CPU 263.95 sec
2015-06-23 01:49:26,177 Stage-4 map = 16%,  reduce = 0%, Cumulative CPU 341.29 sec
2015-06-23 01:49:31,365 Stage-4 map = 15%,  reduce = 0%, Cumulative CPU 334.86 sec
2015-06-23 01:49:39,713 Stage-4 map = 23%,  reduce = 0%, Cumulative CPU 300.53 sec
2015-06-23 01:49:40,758 Stage-4 map = 15%,  reduce = 0%, Cumulative CPU 300.53 sec
2015-06-23 01:49:43,868 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 263.95 sec
MapReduce Total cumulative CPU time: 4 minutes 23 seconds 950 msec
Ended Job = job_1434953415026_0374 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1434953415026_0374_m_000025 (and more) from job job_1434953415026_0374
Examining task ID: task_1434953415026_0374_m_000004 (and more) from job job_1434953415026_0374
Examining task ID: task_1434953415026_0374_m_000019 (and more) from job job_1434953415026_0374
Examining task ID: task_1434953415026_0374_m_000003 (and more) from job job_1434953415026_0374
Examining task ID: task_1434953415026_0374_m_000022 (and more) from job job_1434953415026_0374
Examining task ID: task_1434953415026_0374_m_000002 (and more) from job job_1434953415026_0374
Examining task ID: task_1434953415026_0374_m_000005 (and more) from job job_1434953415026_0374

Task with the most failures(4): 
-----
Task ID:
  task_1434953415026_0374_m_000000

URL:
  http://pfaquaap1u:8088/taskdetails.jsp?jobid=job_1434953415026_0374&tipid=task_1434953415026_0374_m_000000
-----
Diagnostic Messages for this Task:
FSError: java.io.IOException: No space left on device

However, running df shows that I have more than 80 GB of free space on the local system, and df -i shows that no inode pool is exhausted. The script runs fine with a smaller input. Can anyone tell me what I should do here? Thanks in advance.
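For what it's worth, the filesystem that actually fills up during a stuck map phase is often a worker node's scratch partition rather than the one df reports on the submitting host. A minimal sketch for checking this (the /tmp path below is only an illustrative assumption; the real directories come from your cluster's yarn.nodemanager.local-dirs / mapreduce.cluster.local.dir settings):

```shell
# Map-side spill and shuffle data land in the NodeManager's local dirs,
# which may sit on a different (smaller) filesystem than the one `df`
# reports on the gateway host. /tmp is used here only as a placeholder.

# Free space on the filesystem holding the scratch dir:
df -h /tmp

# Inode usage on the same filesystem:
df -i /tmp
```

Running these on every worker node, not just the node you submit from, can reveal one node whose scratch partition is full even though the rest of the cluster looks healthy.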


1 Answer

Stack Overflow user

Answered on 2015-06-23 15:13:36

I don't think df will show you the HDFS usage. Instead, try hadoop fs -df -h (see this question: HDFS free space available command).

I think you may just need to add more nodes to your cluster!
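Besides adding capacity, one common way to reduce the local-disk pressure that causes this error is to compress the intermediate map output. A sketch of the session settings, assuming a standard Hadoop 2.x / Hive setup and that the Snappy codec is installed on the cluster (verify both against your versions before relying on this):

```sql
-- Put these at the top of the Hive script, before the failing query.
SET hive.exec.compress.intermediate=true;
SET mapreduce.map.output.compress=true;
SET mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
```

Compressed map output shrinks both the spill files on each node's scratch disk and the shuffle traffic, which is often enough to get a job past the 70% map stage on the same hardware.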

0 votes
Original content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link:

https://stackoverflow.com/questions/30995212
