
PySpark on EMR: should both spark.executor.pyspark.memory and executor memory be set?

Stack Overflow user
Asked on 2019-08-05 08:47:34
1 answer · 1.2K views · 0 followers · 0 votes

I'm running a heavy repartitioning job (heavy because of the sheer data volume, not to work around anything Spark is doing), and I keep hitting all kinds of memory errors.

I'm very new to Spark, and after a bit of research I finally found a setup that makes execution quite fast, but if the table I'm trying to repartition is too big, I run into yet another memory error.

Currently, the way I'm configuring it in EMR, for 9 r3.8xlarge instances, is as follows:

--executor-cores 11 --executor-memory 180G

My question is: should I also set --conf spark.executor.pyspark.memory? If so, to what value? Should it be the same as the executor memory?

I can't say this for certain, but I have a feeling that when I set both to the same value, it crashes with a Java heap error (so I suspect it ends up trying to allocate too much RAM).
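
To make that feeling concrete, here is a rough sketch of the sizing arithmetic, assuming Spark 2.4's YARN allocation model, where spark.executor.pyspark.memory is requested on top of executor memory and its overhead (the figures are illustrative, not a confirmed diagnosis):

# Back-of-the-envelope per-executor container request (assumed Spark 2.4 on YARN:
# request = executor memory + memory overhead + pyspark memory).
executor_memory_mb = 180 * 1024                   # --executor-memory 180G
overhead_mb = max(384, executor_memory_mb // 10)  # default overhead: 10% of executor memory, min 384 MB
pyspark_memory_mb = 180 * 1024                    # hypothetical: spark.executor.pyspark.memory=180G

request_gib = (executor_memory_mb + overhead_mb + pyspark_memory_mb) / 1024
print(f"~{request_gib:.0f} GiB requested per executor")  # ~378 GiB
# An r3.8xlarge has 244 GiB of RAM, so YARN could never place such a container,
# which would be consistent with crashes when both values are set to 180G.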

As asked in the comments, the latest error I got from EMR is:

diagnostics: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=80943,containerID=container_1564657600123_0004_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 5.1 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1564657600123_0004_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 81090 81021 80943 80943 (python) 331 403 1709522944 15143 python emr_interim_aad_ds_conversions_4.py 
    |- 81021 80943 80943 80943 (java) 313384 5420 3625050112 345744 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 80943 80941 80943 80943 (bash) 1 1 115879936 668 /bin/bash -c LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native" /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004 Then click on links to logs of each attempt.
. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1564757608574
     final status: FAILED
     tracking URL: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004
     user: hadoop
19/08/03 09:44:57 ERROR Client: Application diagnostics message: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=80943,containerID=container_1564657600123_0004_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 5.1 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1564657600123_0004_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 81090 81021 80943 80943 (python) 331 403 1709522944 15143 python emr_interim_aad_ds_conversions_4.py 
    |- 81021 80943 80943 80943 (java) 313384 5420 3625050112 345744 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 80943 80941 80943 80943 (bash) 1 1 115879936 668 /bin/bash -c LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native" /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004 Then click on links to logs of each attempt.
. Failing the application.
Exception in thread "main" org.apache.spark.SparkException: Application application_1564657600123_0004 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1148)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1525)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/08/03 09:44:57 INFO ShutdownHookManager: Shutdown hook called
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-245f0132-a6e5-4a6d-874f-a71942b1636f
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-db12c471-c7d5-4c86-8cda-bf3246ffb860
Command exiting with ret '1'

1 Answer

Stack Overflow user

Accepted answer

Posted on 2019-08-05 09:39:03

Try increasing your memory. Below is an example configuration from a PySpark script; you can tune your memory along these lines.

from pyspark import SparkConf

conf = SparkConf()
conf.set('spark.dynamicAllocation.enabled', 'false')
conf.set('spark.yarn.am.memory', '4g')             # as the log shows, you need to increase your AM memory
conf.set('spark.yarn.am.cores', '2')
conf.set('spark.executor.memoryOverhead', '1200')  # off-heap memory (in MB) per executor, used by the container itself
conf.set('spark.executor.memory', '2500m')         # memory * instances should be less than the node's total memory
conf.set('spark.executor.cores', '4')              # --executor-cores
conf.set('spark.executor.instances', '8')          # --num-executors
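
A SparkConf on its own does nothing until a session is built from it. A minimal usage sketch (the app name is illustrative), noting that executor and AM settings must be in place before the session starts:

from pyspark.sql import SparkSession

# Executor/AM settings only take effect if they are set before the session
# (and its backing JVM) is created, so pass the conf at construction time.
spark = (SparkSession.builder
         .appName('repartition-job')  # hypothetical name
         .config(conf=conf)
         .getOrCreate())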

By the way, repartition is a heavy operator. If it fits your case, you can use coalesce instead, which avoids a shuffle.
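
To illustrate the difference (the path and partition counts below are made up):

# coalesce() only merges existing partitions, so it avoids a full shuffle;
# repartition() redistributes every row, which is expensive but rebalances data.
df = spark.read.parquet('s3://my-bucket/big-table')  # hypothetical path
fewer = df.coalesce(200)        # no shuffle; can only reduce the partition count
balanced = df.repartition(200)  # full shuffle; evens out partition sizes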

1 vote
The original content of this page is provided by Stack Overflow.
Source: https://stackoverflow.com/questions/57355047
