
java.lang.OutOfMemoryError: Java heap space and Job running in-process (local Hadoop)

Stack Overflow user
Asked 2019-08-09 16:01:56
3 answers · 3K views · 0 followers · 3 votes

My setup: a 4-node cluster (1 master, 3 workers) on Google Cloud Platform, running NixOS Linux.

I have been using the TPC-DS toolkit to generate the data, and the queries are the standard ones. On smaller datasets / simpler queries they work fine. The queries I use are taken from here: https://github.com/hortonworks/hive-testbench/tree/hdp3/sample-queries-tpcds

Here is the first one, query1.sql:

WITH customer_total_return AS 
( 
         SELECT   sr_customer_sk AS ctr_customer_sk , 
                  sr_store_sk    AS ctr_store_sk , 
                  Sum(sr_fee)    AS ctr_total_return 
         FROM     store_returns , 
                  date_dim 
         WHERE    sr_returned_date_sk = d_date_sk 
         AND      d_year =2000 
         GROUP BY sr_customer_sk , 
                  sr_store_sk) 
SELECT   c_customer_id 
FROM     customer_total_return ctr1 , 
         store , 
         customer 
WHERE    ctr1.ctr_total_return > 
         ( 
                SELECT Avg(ctr_total_return)*1.2 
                FROM   customer_total_return ctr2 
                WHERE  ctr1.ctr_store_sk = ctr2.ctr_store_sk) 
AND      s_store_sk = ctr1.ctr_store_sk 
AND      s_state = 'NM' 
AND      ctr1.ctr_customer_sk = c_customer_sk 
ORDER BY c_customer_id limit 100;

At first I had the problem that I could not run it successfully at all, hitting java.lang.OutOfMemoryError: Java heap space.

What I did:

  1. Increased the GCP node specs (up to 7.5 GB of RAM and dual-core CPUs)
  2. Set these variables in Hive:
set mapreduce.map.memory.mb=2048;
set mapreduce.map.java.opts=-Xmx1024m;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.reduce.java.opts=-Xmx3072m;
set mapred.child.java.opts=-Xmx1024m;
  3. Restarted Hive

After that, the query worked on the 1 GB dataset (as did other similar queries). I monitored this with htop: memory usage never exceeded 2 GB, while both CPU cores were at almost 100%.
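A common rule of thumb (not from the original post) is to keep each `-Xmx` heap at roughly 80% of the corresponding container size, leaving headroom for JVM overhead. A minimal sketch of that arithmetic, using the container sizes from the settings above:

```python
# Rule-of-thumb sizing: heap (-Xmx) ~ 80% of the MapReduce container size
# (mapreduce.*.memory.mb), leaving the rest for metaspace, thread stacks,
# and native buffers. The 0.8 fraction is a convention, not a hard rule.
def heap_for_container(container_mb: int, fraction: float = 0.8) -> int:
    """Return a suggested -Xmx value (MB) for a given *.memory.mb setting."""
    return int(container_mb * fraction)

# Values from the question: map container 2048 MB, reduce container 4096 MB.
print(heap_for_container(2048))  # suggested map-task heap, in MB
print(heap_for_container(4096))  # suggested reduce-task heap, in MB
```

Note that the settings in the question already roughly follow this pattern (1024 MB heap in a 2048 MB map container, 3072 MB heap in a 4096 MB reduce container).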

The problem now is that with more complex queries on larger datasets, the error comes back:

The query runs fine for a minute or so, but then ends in failure. Full stack trace:

hive> with customer_total_return as
    > (select sr_customer_sk as ctr_customer_sk
    > ,sr_store_sk as ctr_store_sk
    > ,sum(SR_FEE) as ctr_total_return
    > from store_returns
    > ,date_dim
    > where sr_returned_date_sk = d_date_sk
    > and d_year =2000
    > group by sr_customer_sk
    > ,sr_store_sk)
    >  select c_customer_id
    > from customer_total_return ctr1
    > ,store
    > ,customer
    > where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
    > from customer_total_return ctr2
    > where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
    > and s_store_sk = ctr1.ctr_store_sk
    > and s_state = 'TN'
    > and ctr1.ctr_customer_sk = c_customer_sk
    > order by c_customer_id
    > limit 100;
No Stats for default@store_returns, Columns: sr_returned_date_sk, sr_fee, sr_store_sk, sr_customer_sk
No Stats for default@date_dim, Columns: d_date_sk, d_year
No Stats for default@store, Columns: s_state, s_store_sk
No Stats for default@customer, Columns: c_customer_sk, c_customer_id
Query ID = root_20190811164854_c253c67c-ef94-4351-b4d3-74ede4c5d990
Total jobs = 14
Stage-29 is selected by condition resolver.
Stage-1 is filtered out by condition resolver.
Stage-30 is selected by condition resolver.
Stage-10 is filtered out by condition resolver.
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
Execution completed successfully
MapredLocal task succeeded
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Execution completed successfully
MapredLocal task succeeded
Launching Job 3 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:19,415 Stage-20 map = 0%,  reduce = 0%
2019-08-11 16:49:22,418 Stage-20 map = 100%,  reduce = 0%
Ended Job = job_local404291246_0005
Launching Job 4 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:24,718 Stage-22 map = 0%,  reduce = 0%
2019-08-11 16:49:27,721 Stage-22 map = 100%,  reduce = 0%
Ended Job = job_local566999875_0006
Launching Job 5 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:29,958 Stage-2 map = 0%,  reduce = 0%
2019-08-11 16:49:33,970 Stage-2 map = 100%,  reduce = 0%
2019-08-11 16:49:35,974 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_local1440279093_0007
Launching Job 6 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:37,235 Stage-11 map = 0%,  reduce = 0%
2019-08-11 16:49:40,421 Stage-11 map = 100%,  reduce = 0%
2019-08-11 16:49:42,424 Stage-11 map = 100%,  reduce = 100%
Ended Job = job_local1508103541_0008
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

2019-08-11 16:49:51 Dump the side-table for tag: 1 with group count: 21 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10019/HashTable-Stage-19/MapJoin-mapfile71--.hashtable
Execution completed successfully
MapredLocal task succeeded
Launching Job 7 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:53,956 Stage-19 map = 100%,  reduce = 0%
Ended Job = job_local2121921517_0009
Stage-26 is filtered out by condition resolver.
Stage-27 is selected by condition resolver.
Stage-4 is filtered out by condition resolver.

2019-08-11 16:50:01 Dump the side-table for tag: 0 with group count: 99162 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable
2019-08-11 16:50:02 Uploaded 1 File to: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable (2832042 bytes)
Execution completed successfully
MapredLocal task succeeded
Launching Job 9 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:04,004 Stage-17 map = 0%,  reduce = 0%
2019-08-11 16:50:05,005 Stage-17 map = 100%,  reduce = 0%
Ended Job = job_local694362009_0010
Stage-24 is selected by condition resolver.
Stage-25 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.

SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2019-08-11 16:50:12 Starting to launch local task to process map join;  maximum memory = 239075328
Execution completed successfully
MapredLocal task succeeded
Launching Job 11 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:14,254 Stage-13 map = 100%,  reduce = 0%
Ended Job = job_local1812693452_0011
Launching Job 12 out of 14
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:50:15,481 Stage-6 map = 0%,  reduce = 0%
Ended Job = job_local920309638_0012 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-20:  HDFS Read: 8662606197 HDFS Write: 0 SUCCESS
Stage-Stage-22:  HDFS Read: 9339349675 HDFS Write: 0 SUCCESS
Stage-Stage-2:  HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-11:  HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-19:  HDFS Read: 4704638883 HDFS Write: 0 SUCCESS
Stage-Stage-17:  HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-13:  HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-6:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

The problem in the hive.log file is still the same:

java.lang.Exception: java.lang.OutOfMemoryError: Java heap space

I realized that my worker nodes are actually doing nothing (htop shows them idle, and only the master node is working), which even shows up in the stack trace:

Job running in-process (local Hadoop)

How do I make Hive use HDFS and not just local Hadoop? Running hdfs dfs -df -h hdfs://<redacted>:9000/ returns

Filesystem                   Size    Used  Available  Use%
hdfs://<redacted>:9000  88.5 G  34.3 G     35.2 G   39%

which is correct, since I have 3 worker nodes with 30 GB of disk each.
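The line `Job running in-process (local Hadoop)` in the trace means Hive is using Hadoop's local job runner instead of submitting jobs to the cluster. A hedged sketch of the settings that usually control this (the property names are standard Hive/Hadoop ones, but whether these are the misconfigured values on this cluster is an assumption):

```
-- In the Hive session: make sure automatic local-mode execution is off,
-- so small-looking jobs are not silently run inside the HiveServer JVM.
SET hive.exec.mode.local.auto=false;

-- And in mapred-site.xml on the client node, jobs must target YARN rather
-- than the local runner (shown here as Hadoop XML, commented out because
-- this fence is otherwise HiveQL):
--   <property>
--     <name>mapreduce.framework.name</name>
--     <value>yarn</value>
--   </property>
```

If `mapreduce.framework.name` is `local` (the default in a bare Hadoop install), every job runs inside a single JVM on the master, which matches both the idle workers and the heap errors described above.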


3 Answers

Stack Overflow user

Answered 2019-08-13 06:12:11

java.lang.OutOfMemoryError: Java heap space happens when you try to push too much data through a single machine.

Based on the query you provided, here are a few things you can try:

  1. Rewrite the joins explicitly (remove the join conditions from the WHERE clause and use INNER/LEFT JOINs). For example:
FROM     customer_total_return ctr1 
         INNER JOIN store s
             ON ctr1.ctr_store_sk = s.s_store_sk
                AND s_state = 'NM'
         INNER JOIN customer c
             ON ctr1.ctr_customer_sk = c.c_customer_sk
  2. Check whether the data is skewed in one of the following columns:
    1. store_returns -> sr_returned_date_sk
    2. store_returns -> sr_store_sk
    3. store_returns -> sr_customer_sk
    4. customer -> c_customer_sk
    5. store -> s_store_sk

One of these keys may account for a high percentage of the values, which can overload a single node when the data size is large.

Basically, you are trying to rule out node overload as the cause.
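A quick way to check for the skew described above is to count rows per key; this is a sketch using the table and column names from the query (pick whichever key you suspect):

```
-- Top 10 most frequent store keys in store_returns; one key dominating
-- the counts suggests join skew on sr_store_sk. Repeat for sr_customer_sk
-- and sr_returned_date_sk as needed.
SELECT   sr_store_sk, COUNT(*) AS cnt
FROM     store_returns
GROUP BY sr_store_sk
ORDER BY cnt DESC
LIMIT 10;
```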

Let me know if this helps.

Votes: 1

Stack Overflow user

Answered 2019-08-10 09:46:55

It could be a resource problem. Hive queries are executed internally as MapReduce jobs. You can check the job history logs for the failed Hive MapReduce job. Sometimes executing the query from the shell is faster than from the Hive query editor.

Votes: 0

Stack Overflow user

Answered 2019-08-15 19:40:02

OOM problems are, most of the time, related to query performance.

The query here has two parts:

Part 1:

WITH customer_total_return AS 

( 
         SELECT   sr_customer_sk AS ctr_customer_sk , 
                  sr_store_sk    AS ctr_store_sk , 
                  Sum(sr_fee)    AS ctr_total_return 
         FROM     store_returns , 
                  date_dim 
         WHERE    sr_returned_date_sk = d_date_sk 
         AND      d_year =2000 
         GROUP BY sr_customer_sk , 
                  sr_store_sk)

Part 2:

SELECT   c_customer_id 
FROM     customer_total_return ctr1 , 
         store , 
         customer 
WHERE    ctr1.ctr_total_return > 
         ( 
                SELECT Avg(ctr_total_return)*1.2 
                FROM   customer_total_return ctr2 
                WHERE  ctr1.ctr_store_sk = ctr2.ctr_store_sk) 

AND      s_store_sk = ctr1.ctr_store_sk 
AND      s_state = 'NM' 
AND      ctr1.ctr_customer_sk = c_customer_sk 
ORDER BY c_customer_id limit 100;

Try enabling JMX for the Hive cluster and look at the memory usage of both parts of the query, as well as of the inner query in part 2.
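Enabling JMX on the Hive/Hadoop JVMs is typically done through environment options; a sketch, assuming an unauthenticated connection on a test cluster (the port number and flags are illustrative, and the unauthenticated/non-SSL settings are only appropriate for a private test environment):

```shell
# hive-env.sh (or hadoop-env.sh): expose a JMX port so jconsole/VisualVM
# can watch heap usage while each part of the query runs.
export HADOOP_OPTS="$HADOOP_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```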

A few Hive optimizations you can try for the above query:

  1. Use SORT BY instead of ORDER BY -> the SORT BY clause only sorts the data within each reducer.
  2. Partition the tables on the join keys so that only the relevant data is read, instead of doing a full table scan.
  3. Cache small Hive tables in the distributed cache and use a map-side join to reduce shuffling, for example:

select /*+MAPJOIN(b)*/ col1,col2,col3,col4 from table_A a join table_B b on a.account_number=b.account_number

  4. If data skew is possible in any of the tables, use the following parameters:

set hive.optimize.skewjoin=true; set hive.skewjoin.key=100000; (i.e., the threshold number of rows per join key above which the key is treated as skewed and handled separately)
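Item 2 above (partition pruning) could look like the following sketch; the partitioned table name, column types, and load statement are assumptions for illustration, not part of the TPC-DS benchmark schema:

```
-- Hypothetical partitioned copy of store_returns: partitioning by the
-- return-date key lets a date filter (joined to d_year = 2000) prune
-- partitions instead of scanning the whole table.
CREATE TABLE store_returns_part (
  sr_customer_sk BIGINT,
  sr_store_sk    BIGINT,
  sr_fee         DECIMAL(7,2)
)
PARTITIONED BY (sr_returned_date_sk BIGINT);

-- Load with dynamic partitioning; the partition column goes last.
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE store_returns_part PARTITION (sr_returned_date_sk)
SELECT sr_customer_sk, sr_store_sk, sr_fee, sr_returned_date_sk
FROM   store_returns;
```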

Votes: 0
The original content of this page is provided by Stack Overflow. Original link:
https://stackoverflow.com/questions/57433675
