
MPI Scatterv error: There is no specific subroutine for the generic 'mpi_scatterv'

Stack Overflow user
Asked on 2022-11-06 18:41:39
1 answer · 69 views · 0 followers · 0 votes

I am trying to run this code, and the error says "There is no specific subroutine for the generic 'mpi_scatterv'".

program mpiscatterv
!implicit none
use mpi
real, dimension (:,:), allocatable :: r, rcv_buf
integer :: ierr, my_id, n_proc, rcv_id, snd_id, counter
integer, dimension (:), allocatable :: sendcounts, displs, rcv_count
integer, parameter :: master = 0
integer :: i,j,k
integer :: n = 0
integer :: ios_read = 0
integer :: rem ! remaining data 
integer :: div
integer :: summ = 0




open (unit=99, file ='datatest1.dat',iostat=ios_read)
if (ios_read /=0) then
        print*, 'could not be opened'
end if

!open (unit=99, file='rawdata2.dat',iostat=ios_read)
do
  read (99, *, iostat=ios_read) i,x,y

    if (ios_read > 0) then
        print*,'something is wrong'
        print*,ios_read
        stop
   else if (ios_read < 0) then
        print*, 'end of file is reached'
        exit
   else
        n = n+1
   end if
end do
rewind(99)
open(unit=98, file='rawdata2.dat')
allocate(r(2,n))

do i=1,n
read(99,*, iostat=ios_read)j,x,y
r(1,j)= x
r(2,j)= y
write (98,*) x, y
end do

close (99)
close (98)

call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world, my_id, ierr)
call mpi_comm_size(mpi_comm_world, n_proc, ierr)

rem = mod(2*n,n_proc)

allocate (sendcounts(n_proc))
allocate (displs(n_proc))
allocate (rcv_count(n_proc))
allocate (rcv_buf(2,n_proc))


counter = 1

do while (counter<=n_proc)
sendcounts(counter) = int(2*n/n_proc)
  if (rem > 0) then
      sendcounts(counter)=int(2*n/n_proc)+2
    rem = rem-2
  end if
rcv_count=sendcounts
displs(counter)=summ
summ=summ+sendcounts(counter)
counter = counter + 1
end do

counter = 1
if (my_id==0) then
   do while (counter<n_proc)
     print*,sendcounts, displs
     counter = counter + 1
   end do
end if


call MPI_Scatterv(r,sendcounts,displs,mpi_real,rcv_buf,rcv_count,mpi_real,0,mpi_comm_world,ierr)

call mpi_finalize(ierr)
end program

I have data r that I want to scatter. r is an array with 2 columns and n rows. I am using scatterv because the data cannot be split evenly across n_proc processes. When I try to compile it, it shows that error. As far as I can tell, I have followed the limited guidance I could find online. Which argument is wrong?


1 Answer

Stack Overflow user

Accepted answer

Posted on 2022-11-08 17:12:02

There are several problems with your code. I managed to reproduce the error by compiling the code with gfortran v6.3 and OpenMPI v3.1.4.

mpifort main.f90                                                                         
test3.f90:85:106:                                                                                                                           
                                                                                                                                            
 call MPI_Scatterv(r(1,:),sendcounts,displs,mpi_real,rcv_buf(1,:),rcv_count,mpi_real,0,mpi_comm_world,ierr)                                 
                                                                                                          1                                 
Error: There is no specific subroutine for the generic ‘mpi_scatterv’ at (1)

On the OpenMPI website you can see that MPI_Scatterv expects the following:

Input Parameters

sendbuf
    Address of send buffer (choice, significant only at root). 
sendcounts
    Integer array (of length group size) specifying the number of elements to send to each processor. 
displs
    Integer array (of length group size). Entry i specifies the displacement (relative to sendbuf) from which to take the outgoing data to process i. 
sendtype
    Datatype of send buffer elements (handle). 
recvcount
    Number of elements in receive buffer (integer). 
recvtype
    Datatype of receive buffer elements (handle). 
root
    Rank of sending process (integer). 
comm
    Communicator (handle). 

The problem is that recvcount (rcv_count in your case) should be just a single integer, not an array.
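For example, here is a minimal sketch of a call that satisfies the interface, assuming rcv_count is redeclared as a plain integer; since every rank in your code builds the same sendcounts array, each rank can read off its own share (an illustration, not a verified fix):

  integer :: rcv_count                ! scalar, not an array

  ! Fortran arrays are 1-based while MPI ranks start at 0, hence my_id+1
  rcv_count = sendcounts(my_id+1)
  call MPI_Scatterv(r, sendcounts, displs, MPI_REAL, &
                    rcv_buf, rcv_count, MPI_REAL, 0, MPI_COMM_WORLD, ierr)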

I would also point out several other issues you should fix:

  • As others have suggested, implicit none should be uncommented; once it is, x and y will need explicit declarations.

  • You really should avoid sending N-dimensional arrays in MPI (see the sketch after this list).

  • Your rcv_buf is almost certainly wrong. I think it should be at least n/n_proc.
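To expand on the last two points, a hedged sketch of the flat-buffer approach: copy the 2 x n array into a rank-1 buffer, so that counts and displacements are plain runs of REALs, two per (x, y) point. The names flat, my_chunk, and my_count are illustrative:

  real, allocatable :: flat(:), my_chunk(:)
  integer :: my_count

  allocate (flat(2*n))
  flat = reshape(r, [2*n])            ! column-major: x1, y1, x2, y2, ...
  my_count = sendcounts(my_id+1)
  allocate (my_chunk(my_count))
  ! sendcounts/displs are built in units of REALs and are always even,
  ! so every rank receives whole (x, y) pairs
  call MPI_Scatterv(flat, sendcounts, displs, MPI_REAL, &
                    my_chunk, my_count, MPI_REAL, 0, MPI_COMM_WORLD, ierr)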

The code below compiles, but you need to double-check it; it may still not work correctly.

program mpiscatterv
  use mpi
  implicit none
  real, dimension (:,:), allocatable :: r, rcv_buf
  integer :: ierr, my_id, n_proc, rcv_id, snd_id, counter
  integer, dimension (:), allocatable :: sendcounts, displs
  integer, parameter :: master = 0
  integer :: i,j,k, rcv_count
  real    :: x, y
  integer :: n = 0
  integer :: ios_read = 0
  integer :: rem ! remaining data 
  integer :: div
  integer :: summ = 0

  open (unit=99, file ='datatest1.dat',iostat=ios_read)
  if (ios_read /=0) then
    print*, 'could not be opened'
  end if

  !open (unit=99, file='rawdata2.dat',iostat=ios_read)
  do
    read (99, *, iostat=ios_read) i,x,y

    if (ios_read > 0) then
      print*,'something is wrong'
      print*,ios_read
      stop
    else if (ios_read < 0) then
      print*, 'end of file is reached'
      exit
    else
      n = n+1
    end if
  end do
  rewind(99)
  open(unit=98, file='rawdata2.dat')
  allocate(r(2,n))

  do i=1,n
    read(99,*, iostat=ios_read)j,x,y
    r(1,j)= x
    r(2,j)= y
    write (98,*) x, y
  end do

  close (99)
  close (98)

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, my_id, ierr)
  call mpi_comm_size(mpi_comm_world, n_proc, ierr)

  rem = mod(2*n,n_proc)

  allocate (sendcounts(n_proc))
  allocate (displs(n_proc))
  allocate (rcv_buf(2,n/n_proc))


  counter = 1

  do while (counter<=n_proc)
    sendcounts(counter) = int(2*n/n_proc)
    if (rem > 0) then
      sendcounts(counter)=int(2*n/n_proc)+2
      rem = rem-2
    end if
    displs(counter)=summ
    summ=summ+sendcounts(counter)
    counter = counter + 1
  end do

  counter = 1
  if (my_id==0) then
    do while (counter<n_proc)
      print*,sendcounts, displs
      counter = counter + 1
    end do
  end if


  ! note: rcv_count is declared but never assigned in this version; it
  ! should be set (e.g. to sendcounts(my_id+1)) before this call
  call MPI_Scatterv(r(1,:),sendcounts,displs,mpi_real,rcv_buf(1,:),rcv_count,mpi_real,0,mpi_comm_world,ierr)

  call mpi_finalize(ierr)
end program
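To try it out, compile with the MPI wrapper and launch with a few ranks (assuming OpenMPI; the rank count of 4 is arbitrary):

  mpifort main.f90 -o scatterv
  mpirun -np 4 ./scatterv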
1 vote
Original content from Stack Overflow. Source:

https://stackoverflow.com/questions/74338720
