Dask application fails when too many workers fail
Stack Overflow user
Asked 2019-06-22 00:05:03
1 answer · 89 views · 0 followers · 0 votes

I'm running a Dask (1.2) application with Dask-Yarn (0.6.0) on an EMR cluster. Today I ran into a situation where my workers failed (due to an HDFS error) and skein.ApplicationMaster kept recreating new workers indefinitely. Is there a way to instruct Dask-Yarn to cancel the application if too many workers fail?

Specifically, my application master log looks like this:

19/06/21 16:00:27 INFO skein.ApplicationMaster: RESTARTING: adding new container to replace dask.worker_805.
19/06/21 16:00:27 INFO skein.ApplicationMaster: REQUESTED: dask.worker_806
19/06/21 16:00:27 WARN skein.ApplicationMaster: FAILED: dask.worker_804 - Could not obtain block: BP-1234110000-10.174.17.184-1561122672601:blk_1073741831_1007 file=/user/hadoop/.skein/application_1561122685021_0003/FED3ABF369AAE224B4BB8A3A77120E1C/cached_volume.sqlite3
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1234110000-10.174.17.184-1561122672601:blk_1073741831_1007 file=/user/hadoop/.skein/application_1561122685021_0003/FED3ABF369AAE224B4BB8A3A77120E1C/cached_volume.sqlite3
    at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:267)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

1 Answer

Stack Overflow user

Accepted answer

Answered 2019-06-25 23:35:12

If you're using the YarnCluster constructor, you can set the maximum number of allowed worker restarts with the worker_restarts kwarg:

from dask_yarn import YarnCluster

# Allow a maximum of 3 worker restarts before the application fails
cluster = YarnCluster(worker_restarts=3, ...)
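
For context, a minimal usage sketch of the constructor approach, assuming a standard dask-yarn deployment; the Client connection and scale call below are ordinary dask usage, not part of the original answer:

from dask.distributed import Client
from dask_yarn import YarnCluster

# Fail the whole YARN application after 3 cumulative worker restarts
cluster = YarnCluster(worker_restarts=3)
cluster.scale(4)           # request 4 workers from YARN

client = Client(cluster)   # run subsequent dask computations on this cluster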

Alternatively, if you're using a custom specification, you can set the maximum number of allowed restarts with max_restarts:

# /path/to/spec.yaml
name: dask
queue: myqueue

services:
  dask.worker:
    # Don't start any workers initially
    instances: 0
    # A maximum of 3 worker failures are allowed before failure
    max_restarts: 3
    # Restrict workers to 4 GiB and 2 cores each
    resources:
      memory: 4 GiB
      vcores: 2
    # Distribute this python environment to every worker node
    files:
      environment: /path/to/my/environment.tar.gz
    # The bash script to start the worker
    # Here we activate the environment, then start the worker
    script: |
      source environment/bin/activate
      dask-yarn services worker
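
To launch a cluster from a spec file like the one above, dask-yarn provides YarnCluster.from_specification; a minimal sketch, assuming the spec is saved at the /path/to/spec.yaml shown in the example:

from dask_yarn import YarnCluster

# Build the cluster from the custom skein specification; once worker
# failures exceed max_restarts, the application fails instead of
# restarting workers indefinitely.
cluster = YarnCluster.from_specification('/path/to/spec.yaml')
cluster.scale(4)  # the spec starts 0 workers, so request some explicitly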
1 vote
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/56706960