Repeated serial calls to subprocess.Popen() from a script parallelized with mpi4py eventually corrupt data during communication, showing up as various kinds of pickle.UnpicklingError (I have seen unpickling errors: EOF, invalid unicode characters, invalid load key, and unpickling stack underflow). It seems to happen only when the amount of data communicated is large, the number of serial calls to subprocess is large, or the number of MPI processes is large.
I can reproduce the error with python>=2.7, mpi4py>=3.0.1, and openmpi>=3.0.0. Ultimately I want to communicate Python objects, so I am using the lowercase mpi4py methods. Here is minimal code that reproduces the error:
#!/usr/bin/env python
from mpi4py import MPI
from copy import deepcopy
import subprocess
nr_calcs = 4
tasks_per_calc = 44
data_size = 55000
# --------------------------------------------------------------------
def run_test(nr_calcs, tasks_per_calc, data_size):
    # Init MPI
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    comm_size = comm.Get_size()
    # Run Moc Calcs
    icalc = 0
    while True:
        if icalc > nr_calcs - 1: break
        index = icalc
        icalc += 1
        # Init Moc Tasks
        task_list = []
        moc_task = data_size*"x"
        if rank==0:
            task_list = [deepcopy(moc_task) for i in range(tasks_per_calc)]
        task_list = comm.bcast(task_list)
        # Moc Run Tasks
        itmp = rank
        while True:
            if itmp > len(task_list)-1: break
            itmp += comm_size
            proc = subprocess.Popen(["echo", "TEST CALL TO SUBPROCESS"],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                    shell=False)
            out,err = proc.communicate()
        print("Rank {:3d} Finished Calc {:3d}".format(rank, index))
# --------------------------------------------------------------------
if __name__ == '__main__':
    run_test(nr_calcs, tasks_per_calc, data_size)

Running this on a 44-core node with 44 MPI processes completes the first three "calcs" successfully, but during the final loop some processes throw:
Traceback (most recent call last):
File "./run_test.py", line 54, in <module>
run_test(nr_calcs, tasks_per_calc, data_size)
File "./run_test.py", line 39, in run_test
task_list = comm.bcast(task_list)
File "mpi4py/MPI/Comm.pyx", line 1257, in mpi4py.MPI.Comm.bcast
File "mpi4py/MPI/msgpickle.pxi", line 639, in mpi4py.MPI.PyMPI_bcast
File "mpi4py/MPI/msgpickle.pxi", line 111, in mpi4py.MPI.Pickle.load
File "mpi4py/MPI/msgpickle.pxi", line 101, in mpi4py.MPI.Pickle.cloads
_pickle.UnpicklingError

Sometimes the UnpicklingError carries a descriptor, such as invalid load key "x", an EOF error, an invalid unicode character, or an unpickling stack underflow.
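These descriptors are consistent with the unpickler being handed raw payload bytes instead of a valid pickle stream. A minimal sketch, independent of MPI, that reproduces the "invalid load key" message from the 'x' payload used above:

import pickle

try:
    pickle.loads(b"xxxx")    # payload bytes where a pickle opcode is expected
except pickle.UnpicklingError as exc:
    print(exc)               # prints: invalid load key, 'x'.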
Edit: Using openmpi<3.0.0, or mvapich2, seems to resolve the problem, but it would still be good to understand what is going on.
Answered on 2019-10-28 23:57:12
I had the same problem. In my case, I was running my code in a Python virtual environment with mpi4py installed, and following Intel's recommendation to set mpi4py.rc.recv_mprobe = False: https://software.intel.com/en-us/articles/python-mpi4py-on-intel-true-scale-and-omni-path-clusters
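A minimal sketch of that workaround; the key detail is that the rc flag must be set before MPI is imported:

import mpi4py
mpi4py.rc.recv_mprobe = False   # disable matched probes; must precede the MPI import
from mpi4py import MPI

comm = MPI.COMM_WORLD
print("rank", comm.Get_rank(), "of", comm.Get_size())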
In the end, however, I simply used the uppercase methods Recv and Send with NumPy arrays. They work fine together with subprocess and need no additional tricks.
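A minimal sketch of that buffer-based approach, assuming the payload can be packed into a NumPy byte array whose size both sides agree on; the uppercase Send/Recv transfer the raw buffer with no pickling involved:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.zeros(55000, dtype=np.uint8)      # preallocated send/receive buffer
if rank == 0:
    buf[:] = ord("x")                      # fill the payload on the root
    for dest in range(1, comm.Get_size()):
        comm.Send(buf, dest=dest, tag=11)  # uppercase Send: raw buffer, no pickling
else:
    comm.Recv(buf, source=0, tag=11)       # uppercase Recv into the preallocated buffer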
https://stackoverflow.com/questions/57936912