
Parallel DoE with distributed components in OpenMDAO

Stack Overflow user
Asked on 2016-04-12 14:49:40 · 1 answer · 129 views · 0 votes

I'm trying to run a DoE in parallel on a distributed code, and it doesn't seem to work. Below is a simple example that raises the same error as my actual code: four processes split into two parallel DoE groups, each of which runs the two-process distributed component.

import numpy as np

from openmdao.api import IndepVarComp, Group, Problem, Component
from openmdao.core.mpi_wrap import MPI
from openmdao.drivers.latinhypercube_driver import LatinHypercubeDriver

if MPI:
    # Running under mpirun: use the PETSc-based data-passing implementation
    from openmdao.core.petsc_impl import PetscImpl as impl
    rank = MPI.COMM_WORLD.rank
else:
    from openmdao.api import BasicImpl as impl
    rank = 0



class DistribCompSimple(Component):
    """Uses 2 procs but takes full input vars"""

    def __init__(self, arr_size=2):
        super(DistribCompSimple, self).__init__()

        self._arr_size = arr_size
        self.add_param('invar', 0.)
        self.add_output('outvec', np.ones(arr_size, float))

    def solve_nonlinear(self, params, unknowns, resids):
        # Each process fills the full-size output vector, scaled
        # differently per rank so the two processes are distinguishable.
        if rank == 0:
            unknowns['outvec'] = params['invar'] * np.ones(self._arr_size) * 0.25
        elif rank == 1:
            unknowns['outvec'] = params['invar'] * np.ones(self._arr_size) * 0.5

        print('hello from rank', rank, unknowns['outvec'])

    def get_req_procs(self):
        # (min, max) number of processes this component requires
        return (2, 2)


if __name__ == '__main__':

    N_PROCS = 4

    prob = Problem(impl=impl)
    root = prob.root = Group()

    root.add('p1', IndepVarComp('invar', 0.), promotes=['*'])
    root.add('comp', DistribCompSimple(2), promotes=['*'])

    prob.driver = LatinHypercubeDriver(4, num_par_doe=N_PROCS // 2)  # two parallel DoE groups

    prob.driver.add_desvar('invar', lower=-5.0, upper=5.0)

    prob.driver.add_objective('outvec')

    prob.setup(check=False)
    prob.run()

I run it with

mpirun -np 4 python lhc_driver.py

and get this error:

Traceback (most recent call last):
  File "lhc_driver.py", line 60, in <module>
    prob.run()
  File "/Users/frza/git/OpenMDAO/openmdao/core/problem.py", line 1064, in run
    self.driver.run(self)
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/predeterminedruns_driver.py", line 157, in run
    self._run_par_doe(problem.root)
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/predeterminedruns_driver.py", line 221, in _run_par_doe
    for case in self._get_case_w_nones(self._distrib_build_runlist()):
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/predeterminedruns_driver.py", line 283, in _get_case_w_nones
    case = next(it)
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/latinhypercube_driver.py", line 119, in _distrib_build_runlist
    run_list = comm.scatter(job_list, root=0)
  File "MPI/Comm.pyx", line 1286, in mpi4py.MPI.Comm.scatter (src/mpi4py.MPI.c:109079)
  File "MPI/msgpickle.pxi", line 707, in mpi4py.MPI.PyMPI_scatter (src/mpi4py.MPI.c:48114)
  File "MPI/msgpickle.pxi", line 161, in mpi4py.MPI.Pickle.dumpv (src/mpi4py.MPI.c:41605)
ValueError: expecting 4 items, got 2
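
For context on the ValueError: mpi4py's comm.scatter is collective over the whole communicator and requires the list passed on the root rank to contain exactly comm.size items, one per process. From the traceback, the driver appears to build one job entry per DoE group (2 here) but scatters it across all 4 processes. A minimal standalone reproduction of that mismatch (plain mpi4py, not OpenMDAO code; the case names are just placeholders; run with mpirun -np 4 python scatter_demo.py):

from mpi4py import MPI

comm = MPI.COMM_WORLD

# Root builds one entry per DoE group (2 of them), but scatter is
# collective over all 4 ranks, so mpi4py raises the same
# "ValueError: expecting 4 items, got 2" as in the traceback above.
job_list = [['case0'], ['case1']] if comm.rank == 0 else None

run_list = comm.scatter(job_list, root=0)
print(comm.rank, run_list)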

I don't see a test for this use case on the latest master, so does that mean it isn't supported yet, or is this a bug?


1 Answer

Stack Overflow user
Answer accepted · Posted on 2016-04-12 15:11:34

Thanks for submitting a simple test case for this. I added the parallel DOE support recently and forgot to test it with distributed components. I'll add a story to our bug tracker and hopefully it will be fixed soon.
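
Until that fix lands, one possible workaround (an untested sketch, not part of the answer above) is to drop the parallel-DoE layer and run the cases serially, since the serial code path doesn't go through the scatter that fails; the distributed component itself still runs on its two MPI processes:

# Sketch: same model as in the question, but a serial DoE
# (num_par_doe left at its default), so the 4 cases run one after
# another while DistribCompSimple still gets its required 2 processes.
# Run with: mpirun -np 2 python lhc_driver.py
prob = Problem(impl=impl)
root = prob.root = Group()
root.add('p1', IndepVarComp('invar', 0.), promotes=['*'])
root.add('comp', DistribCompSimple(2), promotes=['*'])

prob.driver = LatinHypercubeDriver(4)
prob.driver.add_desvar('invar', lower=-5.0, upper=5.0)
prob.driver.add_objective('outvec')

prob.setup(check=False)
prob.run()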

0 votes
Original question: https://stackoverflow.com/questions/36576822
