
SLURM sbatch script in a Python subprocess

Stack Overflow user
Asked 2022-11-04 21:58:48
1 answer · 87 views · 0 followers · 0 votes

Update: I am able to get the variable assignment from SLURM_JOB_ID with the line JOBID=${SLURM_JOB_ID}. However, SLURM_ARRAY_JOB_ID is still not being assigned to JOBID.

Because I need to support an existing HPC workflow, I have to pass a bash script through a Python subprocess. This worked well with OpenPBS, and now I need to convert it to SLURM. I am working with SLURM hosted on Ubuntu 20.04, and the main problem is that the job-array variables are not being populated. Below is a code snippet greatly simplified down to the relevant parts.

My specific question is: why are the lines JOBID=${SLURM_JOB_ID} and JOBID=${SLURM_ARRAY_JOB_ID} not getting their assignments? I have tried a heredoc and various quoting and escaping styles without success.

The code could certainly be cleaner; it is the product of multiple people working without a common standard.

These are related:

Accessing task id for array jobs

Handling bash system variables and slurm environmental variables in a wrapper script

Code language: python
        sbatch_arguments = "#SBATCH --array=1-{}".format(get_instance_count())

        proc = Popen('ssh ${USER}@server_hostname /apps/workflows/slurm_wrapper.sh sbatch', shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
        job_string = """#!/bin/bash -x
        #SBATCH --job-name=%(name)s
        #SBATCH -t %(walltime)s
        #SBATCH --cpus-per-task %(processors)s
        #SBATCH --mem=%(memory)s
        %(sbatch_args)s

        # Assign JOBID
        if [ %(num_jobs)s -eq 1 ]; then
            JOBID=${SLURM_JOB_ID}
        else
            JOBID=${SLURM_ARRAY_JOB_ID}
        fi

        exit ${returnCode}

        """ % ({"walltime": walltime
                ,"processors": total_cores
                ,"binary": self.binary_name
                ,"name": ''.join(x for x in self.binary_name if x.isalnum())
                ,"memory": memory
                ,"num_jobs": self.get_instance_count()
                ,"sbatch_args": sbatch_arguments
                })

        # Send job_string to sbatch
        stdout, stderr = proc.communicate(input=job_string)
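As a side note, the fallback between the two SLURM variables can be reproduced without a cluster: SLURM_ARRAY_JOB_ID only exists for jobs submitted with --array, so a plain job must fall back to SLURM_JOB_ID. A minimal sketch, using plain bash in place of the remote sbatch wrapper, with made-up DEMO_* variables standing in for the SLURM ones:

```python
import os
import subprocess

# Stand-ins for SLURM_JOB_ID / SLURM_ARRAY_JOB_ID; sbatch is replaced by plain bash.
script = """#!/bin/bash
# ${VAR:-fallback} uses the fallback when VAR is unset, avoiding the if/else above.
JOBID=${DEMO_ARRAY_JOB_ID:-$DEMO_JOB_ID}
echo "$JOBID"
"""

# DEMO_ARRAY_JOB_ID is deliberately absent, mimicking a non-array job.
env = {**os.environ, "DEMO_JOB_ID": "12345"}
env.pop("DEMO_ARRAY_JOB_ID", None)

result = subprocess.run(["bash"], input=script, capture_output=True,
                        text=True, env=env)
print(result.stdout.strip())  # -> 12345
```

With DEMO_ARRAY_JOB_ID exported, the same script would echo the array ID instead, which is the behavior the if/else block in the job script is trying to achieve.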

1 Answer

Stack Overflow user

Accepted answer

Answered 2022-12-01 00:58:50

Following up on this: I solved the problem by passing the SBATCH directives as arguments to the sbatch command.

Code language: python
    sbatch_args = """--job-name=%(name)s --time=%(walltime)s --partition=defq --cpus-per-task=%(processors)s --mem=%(memory)s""" % (
                    {"walltime": walltime
                    ,"processors": cores
                    ,"name": ''.join(x for x in self.binary_name if x.isalnum())
                    ,"memory": memory
                    })

    # Open a pipe to the sbatch command. {tee /home/ahs/schuec1/_stderr_slurmqueue | sbatch; }
    # The SLURM variables SLURM_ARRAY_* do not exist until after sbatch is called.
    # Popen.communicate has BASH interpret all variables at the same time the script is sent.
    # Because of that, the job array needs to be declared prior to the rest of the BASH script.

    # It seems further that SBATCH directives are not being evaluated when passed via a string with .communicate.
    # Due to this, all SBATCH directives will be passed as arguments to slurm_wrapper.sh as the first command to the Popen pipe.

    proc = Popen('ssh ${USER}@ch3lahpcgw1.corp.cat.com /apps/workflows/slurm_wrapper.sh sbatch %s' % sbatch_args,
                 shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE,
                 close_fds=True,
                 executable='/bin/bash')
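The argument string above can also be assembled as a list, which sidesteps shell-quoting issues if any value ever contains spaces. A sketch with a hypothetical helper (build_sbatch_args is not part of the original wrapper), producing the same flags as the accepted answer plus the array declaration:

```python
import shlex

def build_sbatch_args(name, walltime, processors, memory, num_jobs):
    """Assemble sbatch CLI flags equivalent to the #SBATCH directives."""
    args = [
        "--job-name={}".format(name),
        "--time={}".format(walltime),
        "--partition=defq",
        "--cpus-per-task={}".format(processors),
        "--mem={}".format(memory),
    ]
    if num_jobs > 1:
        # The array must be declared at submission time, before the script runs.
        args.append("--array=1-{}".format(num_jobs))
    return args

argv = ["sbatch"] + build_sbatch_args("myjob", "01:00:00", 4, "8G", 10)
print(shlex.join(argv))
# -> sbatch --job-name=myjob --time=01:00:00 --partition=defq --cpus-per-task=4 --mem=8G --array=1-10
```

shlex.join quotes any argument that needs it, so the resulting string can be handed to a `shell=True` Popen call like the one above.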
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/74323372
