Running OpenFOAM in parallel on my Ubuntu 20.04 machine doesn't work.
I used it a few weeks ago and it worked fine. A few minutes ago I wanted to use it again and it did not work.
First I decompose the case, then I use the command
mpirun -np 4 interFoam -parallel.
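For reference, this is roughly the full sequence I run for the case (a sketch from memory; the exact pre-processing steps for my modified damBreak case and the numberOfSubdomains entry in system/decomposeParDict are assumptions here):
blockMesh
setFields
decomposePar
mpirun -np 4 interFoam -parallel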
Then this message appears:
kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$ mpirun -np 4 interFoam -parallel
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "(null)" (-43) instead of "Success" (0)
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "(null)" (-43) instead of "Success" (0)
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "(null)" (-43) instead of "Success" (0)
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "(null)" (-43) instead of "Success" (0)
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3304] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3305] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3306] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3307] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[48186,1],0]
Exit code: 1
kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$
The Open MPI version is this:
kai@Kai-Desktop:~/Dokumente$ mpirun --version
mpirun (Open MPI) 4.0.3
So far, I have created a test file *.c:
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
// Initialize the MPI environment
MPI_Init(NULL, NULL);
// Get the number of processes
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
// Get the rank of the process
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
// Get the name of the processor
char processor_name[MPI_MAX_PROCESSOR_NAME];
int name_len;
MPI_Get_processor_name(processor_name, &name_len);
// Print off a hello world message
printf("Hello world from processor %s, rank %d out of %d processors\n",
processor_name, world_rank, world_size);
// Finalize the MPI environment.
MPI_Finalize();
}
If I compile it (roughly as sketched below) and execute it, this is the output:
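(The compile step is just the standard MPI wrapper compiler; the file name hello_world.c is assumed here:)
mpicc hello_world.c -o hello_world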
kai@Kai-Desktop:~/Dokumente$ mpirun -np 4 ./hello_world -parallel
Hello world from processor Kai-Desktop, rank 0 out of 4 processors
Hello world from processor Kai-Desktop, rank 1 out of 4 processors
Hello world from processor Kai-Desktop, rank 2 out of 4 processors
Hello world from processor Kai-Desktop, rank 3 out of 4 processors
Does anyone know what to do to get rid of this error? If you need further information, just write and I will provide it. I am not aware of any MPI-related changes on the system. I did an upgrade from 18.04 to 20.04, but I don't know whether that could have caused this error.
Greetings, Kai
Posted on 2020-12-29 15:28:08
I updated my OpenFOAM v7 to OpenFOAM v8 and it works again.
Still don't know why...
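In case someone else hits this: my guess (not verified) is that after the upgrade from 18.04 to 20.04 interFoam was still linked against the old Open MPI runtime while mpirun came from the new 4.0.3 packages, and updating OpenFOAM rebuilt it against the system MPI. A quick way to check whether the solver and mpirun actually use the same Open MPI installation:
which mpirun
mpirun --version
ldd $(which interFoam) | grep -i mpi    # the libmpi the solver is linked against
echo $WM_MPLIB $FOAM_MPI                # the MPI flavour the OpenFOAM environment selects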
https://askubuntu.com/questions/1303427