I'm new to Condor and am trying to run my Python program on it, but I've hit a difficulty. All the tutorials I've found assume a single-file Python program, whereas my program consists of multiple packages and files, and also uses other libraries such as numpy and scipy. How can I get Condor to run my program in this case? Should I convert the program into some kind of executable? Or is there a way to transfer the Python source code to the Condor machines and have the Python there run the source?
Thanks,
Posted on 2017-05-09 23:32:16
Your job needs to bring along a complete Python installation (including SciPy and NumPy). This involves building a Python installation in a local directory (possibly inside an interactive HTCondor job), installing whatever libraries you need into that local installation, and then creating a tarball of the installation to include in transfer_input_files. Your job must then use a wrapper script that unpacks the Python installation and points the job at the correct Python executable before running your Python script.
Here is one cluster's explanation of how to do this: http://chtc.cs.wisc.edu/python-jobs.shtml
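The wrapper-script step above can be sketched roughly as follows. Everything here is an illustrative assumption: the tarball name python.tar.gz, the pyinstall directory inside it, and the payload my_script.py are placeholders, and the first half only fakes a tiny "installation" so the sketch runs anywhere without a real portable Python build.

```shell
# Stand-in setup so this sketch is self-contained: fake a tiny "Python
# installation" tarball. In a real job, python.tar.gz would be the tarball
# you built yourself and listed in transfer_input_files.
mkdir -p pyinstall/bin
printf '#!/bin/sh\necho "running: $1"\n' > pyinstall/bin/python3
chmod +x pyinstall/bin/python3
tar -czf python.tar.gz pyinstall
rm -rf pyinstall

# --- the wrapper logic the job itself would run ---
set -e
tar -xzf python.tar.gz                  # unpack the shipped installation
export PATH="$PWD/pyinstall/bin:$PATH"  # point the job at the local python
python3 my_script.py                    # now resolves to the unpacked interpreter
```

The key point is the PATH export: after it, `python3` refers to the interpreter that travelled with the job, not whatever happens to be on the execute node.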
Posted on 2020-07-17 09:53:32
By the way, jobs can now be executed in Docker containers via HTCondor!
https://research.cs.wisc.edu/htcondor/HTCondorWeek2015/presentations/ThainG_Docker.pdf
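For reference, a Docker-universe job is selected in the submit description file roughly like this; the image name below is just an example (it must already contain your Python stack, e.g. numpy and scipy), and my_script.py is a placeholder:

```
universe      = docker
docker_image  = python:3.10-slim
executable    = my_script.py
output        = out.$(CLUSTER)
error         = err.$(CLUSTER)
log           = log.$(CLUSTER)
should_transfer_files = YES
queue
```

With this approach the dependency problem moves into the image build, so the execute nodes need nothing but Docker support.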
An alternative to Docker (which I don't recommend, but had to use because a few years ago Condor didn't support Docker) is to use virtual environments. I would create an Anaconda virtual environment in a folder that all Condor nodes can access. Each job running under Condor then needs to activate that virtual environment first.
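The per-job activation pattern can be sketched with a plain Python venv standing in for the shared Anaconda environment; the shared folder path and environment name are site-specific, so a throwaway venv is used here only to make the sketch runnable:

```shell
# Stand-in: create a throwaway env where the shared, node-visible
# Anaconda environment would normally live.
python3 -m venv --without-pip ./demo_env

# --- what each job would do before running its payload ---
. ./demo_env/bin/activate                  # activate the (shared) environment
python -c 'import sys; print(sys.prefix)'  # prefix now points inside the env
```

In a real setup the activation line would instead source the shared Anaconda installation's activation script and `conda activate` the named environment.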
Posted on 2020-10-22 22:36:16
Put the correct python path for condor at the top of the python script you submit
I don't really understand how condor works, but it seems that once I put the correct path to the python of my current environment at the top, it started working. So check where your python command lives:
(automl-meta-learning) miranda9~/automl-meta-learning $ which python
~/miniconda3/envs/automl-meta-learning/bin/python

Then copy-paste it into the top of the python script you submit:
#!/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python

I wish I could include all of this in job.sub. If you know how, please let me know.
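One way to keep this in job.sub itself (a sketch using standard HTCondor submit-file commands, not something taken from the poster's setup) is to make the interpreter the executable and pass the script as an argument, instead of relying on the shebang line:

```
# hypothetical job.sub fragment: run the conda env's python directly
executable = /home/miranda9/miniconda3/envs/automl-meta-learning/bin/python
arguments  = meta_learning_experiments_submission.py
transfer_input_files = meta_learning_experiments_submission.py
queue
```

This assumes the interpreter path is valid on the execute node (e.g. a shared filesystem), which matches the shared-environment setup described above.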
In case my submit script helps you:
####################
#
# Experiments script
# Simple HTCondor submit description file
#
# reference: https://gitlab.engr.illinois.edu/Vision/vision-gpu-servers/-/wikis/HTCondor-user-guide#submit-jobs
#
# chmod a+x test_condor.py
# chmod a+x experiments_meta_model_optimization.py
# chmod a+x meta_learning_experiments_submission.py
# chmod a+x download_miniImagenet.py
#
# condor_submit -i
# condor_submit job.sub
#
####################
# Executable = meta_learning_experiments_submission.py
# Executable = automl-proj/experiments/meta_learning/meta_learning_experiments_submission.py
# Executable = ~/automl-meta-learning/automl-proj/experiments/meta_learning/meta_learning_experiments_submission.py
Executable = /home/miranda9/automl-meta-learning/automl-proj/experiments/meta_learning/meta_learning_experiments_submission.py
## Output Files
Log = condor_job.$(CLUSTER).log.out
Output = condor_job.$(CLUSTER).stdout.out
Error = condor_job.$(CLUSTER).err.out
# Use this to make sure 1 gpu is available. The key words are case insensitive.
Request_gpus = 1
# requirements = ((CUDADeviceName = "Tesla K40m")) && (TARGET.Arch == "X86_64") && (TARGET.OpSys == "LINUX") && (TARGET.Disk >= RequestDisk) && (TARGET.Memory >= RequestMemory) && (TARGET.Cpus >= RequestCpus) && (TARGET.gpus >= Requestgpus) && ((TARGET.FileSystemDomain == MY.FileSystemDomain) || (TARGET.HasFileTransfer))
# requirements = (CUDADeviceName == "Tesla K40m")
# requirements = (CUDADeviceName == "Quadro RTX 6000")
requirements = (CUDADeviceName != "Tesla K40m")
# Note: to use multiple CPUs instead of the default (one CPU), use request_cpus as well
Request_cpus = 8
# E-mail option
Notify_user = me@gmail.com
Notification = always
Environment = MY_CONDOR_JOB_ID=$(CLUSTER)
# "Queue" submits the job described by everything above this line (needs to be at the end of the file).
Queue

I said I submit a python script, so let me copy its top:
#!/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python
import torch
import torch.nn as nn
import torch.optim as optim
# import torch.functional as F
from torch.utils.tensorboard import SummaryWriter

I don't submit a bash script with arguments; the arguments live inside my python script. I don't know how to use bash, so this suits me better.
https://stackoverflow.com/questions/43216514