
C, Open MPI: segmentation fault when calling MPI_Finalize(). The segfault does not always occur, particularly with a small number of processes

Stack Overflow user
Asked on 2012-05-02 09:01:46
2 answers · 3.7K views · 0 followers · 4 votes

I am writing a simple piece of code to learn how to define an MPI_Datatype and use it together with MPI_Gatherv. I wanted to make sure I could combine variable-length, dynamically allocated arrays of structured data on a single process, and this appears to work fine right up until my call to MPI_Finalize(). I have confirmed, using print statements and the Eclipse PTP debugger (with gdb-mi as the backend), that this is where the problem starts to manifest. My main question is: how can I get rid of the segmentation fault?

The segfault does not occur every time the code is run. For example, it has not happened with 2 or 3 processes, but it tends to occur fairly regularly when I run with roughly 4 or more.

Also, the segmentation fault does not occur when I run the code under valgrind. I do get error messages from valgrind, however, and even with a large number of targeted suppressions the output is hard for me to make sense of when MPI functions are involved. I am also worried that adding more suppressions will silence useful error messages.

I compile the regular code with these flags (so the C99 standard is used in both cases): -ansi -pedantic -std=c99 -Wall -march=barcelona -fomit-frame-pointer, and the debug code with: -ansi -pedantic -std=c99 -Wall -g

Both are compiled with the mpicc wrapper around gcc 4.4 and run on a cluster using Red Hat Linux with Open MPI v1.4.5. Please let me know if I have left out any other important information. Here is the code, and thanks in advance:

//#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
//#include <limits.h>

#include "mpi.h"

#define FULL_PROGRAM        1

struct CD{

    int int_ID;
    double dbl_ID;
};

int main(int argc, char *argv[]) {

    int numprocs, myid, ERRORCODE;

#if FULL_PROGRAM
    struct CD *myData=NULL;             //Each process contributes an array of data, comprised of 'struct CD' elements
    struct CD *allData=NULL;            //root will dynamically allocate this array to store all the data from rest of the processes
    int *p_lens=NULL, *p_disp=NULL;     //p_lens stores the number of elements in each process' array, p_disp stores the displacements in bytes
    int MPI_CD_size;                    //stores the size of the MPI_Datatype that is defined to allow communication operations using 'struct CD' elements

    int mylen, total_len=0;             //mylen should be the length of each process' array
                                        //MAXlen is the maximum allowable array length
                                        //total_len will be the sum of mylen across all processes

    // ============ variables related to defining new MPI_Datatype at runtime ====================================================
    struct CD sampleCD = {.int_ID=0, .dbl_ID=0.0};
    int blocklengths[2];                //this describes how many blocks of identical data types will be in the new MPI_Datatype
    MPI_Aint offsets[2];                //this stores the offsets, in bytes(bits?), of the blocks from the 'start' of the datatype
    MPI_Datatype block_types[2];        //this stores which built-in data types the blocks are comprised of
    MPI_Datatype  myMPI_CD;             //just the name of the new datatype
    MPI_Aint myStruct_address, int_ID_address, dbl_ID_address, int_offset, dbl_offset;  //useful place holders for filling the arrays above
    // ===========================================================================================================================
#endif
    // =================== Initializing MPI functionality ============================
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    // ===============================================================================
#if FULL_PROGRAM
    // ================== This part actually formally defines the MPI datatype ===============================================
    MPI_Get_address(&sampleCD, &myStruct_address);          //starting point of struct CD
    MPI_Get_address(&sampleCD.int_ID, &int_ID_address);     //starting point of first entry in CD
    MPI_Get_address(&sampleCD.dbl_ID, &dbl_ID_address);     //starting point of second entry in CD
    int_offset = int_ID_address - myStruct_address;         //offset from start of first to start of CD
    dbl_offset = dbl_ID_address - myStruct_address;         //offset from start of second to start of CD

    blocklengths[0]=1;  blocklengths[1]=1;                  //array telling it how many blocks of identical data types there are, and the number of entries in each block
    //This says there are two blocks of identical data-types, and both blocks have only one variable in them

    offsets[0]=int_offset;  offsets[1]=dbl_offset;          //the first block starts at int_offset, the second block starts at dbl_offset (from 'myStruct_address')

    block_types[0]=MPI_INT; block_types[1]=MPI_DOUBLE;      //the first block contains MPI_INT, the second contains MPI_DOUBLE

    MPI_Type_create_struct(2, blocklengths, offsets, block_types, &myMPI_CD);       //this uses the above arrays to define the MPI_Datatype...an MPI-2 function

    MPI_Type_commit(&myMPI_CD);     //this is the final step to defining/reserving the data type
    // ========================================================================================================================

    mylen   = myid*2;       //each process is told how long its array should be...I used to define that randomly but that just makes things messier

    p_lens  = (int*)        calloc((size_t)numprocs,    sizeof(int));       //allocate memory for the number of elements (p_lens) and offsets from the start of the recv buffer(d_disp)
    p_disp  = (int*)        calloc((size_t)numprocs,    sizeof(int));

    myData  = (struct CD*)  calloc((size_t)mylen,       sizeof(struct CD));         //allocate memory for each process' array
    //if mylen==0, 'a unique pointer to the heap is returned'

    if(!p_lens) {   MPI_Abort(MPI_COMM_WORLD, 1); exit(EXIT_FAILURE);   }
    if(!p_disp) {   MPI_Abort(MPI_COMM_WORLD, 1); exit(EXIT_FAILURE);   }
    if(!myData) {   MPI_Abort(MPI_COMM_WORLD, 1); exit(EXIT_FAILURE);   }


    for(double temp=0.0;temp<1e6;++temp) temp += exp(-10.0);
    MPI_Barrier(MPI_COMM_WORLD);                                //purely for keeping the output organized by giving a time delay

    for (int k=0; k<numprocs; ++k) {

        if(myid==k) {

            //printf("\t ID %d has %d entries: { ", myid, mylen);

            for(int i=0; i<mylen; ++i) {

                myData[i]= (struct CD) {.int_ID=myid*(i+1), .dbl_ID=myid*(i+1)};            //fills data elements with simple pattern
                //printf("%d: (%d,%lg) ", i, myData[i].int_ID, myData[i].dbl_ID);
            }
            //printf("}\n");
        }
    }

    for(double temp=0.0;temp<1e6;++temp) temp += exp(-10.0);
    MPI_Barrier(MPI_COMM_WORLD);                            //purely for keeping the output organized by giving a time delay

    MPI_Gather(&mylen,  1, MPI_INT, p_lens, 1, MPI_INT, 0, MPI_COMM_WORLD);     //Each process sends root the length of the vector they'll be sending

#if 1
    MPI_Type_size(myMPI_CD, &MPI_CD_size);          //gets the size of the MPI_Datatype for p_disp
#else
    MPI_CD_size = sizeof(struct CD);                //using this doesn't change things too much...
#endif

    for(int j=0;j<numprocs;++j) {

        total_len += p_lens[j];

        if (j==0)   {   p_disp[j] = 0;                                      }
        else        {   p_disp[j] = p_disp[j-1] + p_lens[j]*MPI_CD_size;    }
    }

    if (myid==0)    {

        allData = (struct CD*)  calloc((size_t)total_len,   sizeof(struct CD));     //allocate array
        if(!allData)    {   MPI_Abort(MPI_COMM_WORLD, 1); exit(EXIT_FAILURE);   }
    }

    MPI_Gatherv(myData, mylen, myMPI_CD, allData, p_lens, p_disp, myMPI_CD, 0, MPI_COMM_WORLD); //each array sends root process their array, which is stored in 'allData'

    // ============================== OUTPUT CONFIRMING THAT COMMUNICATIONS WERE SUCCESSFUL=========================================
    if(myid==0) {

        for(int i=0;i<numprocs;++i) {
            printf("\n\tElements from %d on MASTER are: { ",i);
            for(int k=0;k<p_lens[i];++k)    {   printf("%d: (%d,%lg) ", k, (allData+p_disp[i]+k)->int_ID, (allData+p_disp[i]+k)->dbl_ID);   }

            if(p_lens[i]==0) printf("NOTHING ");
            printf("}\n");
        }
        printf("\n");       //each data element should appear as two identical numbers, counting upward by the process ID
    }
    // ==========================================================================================================

    if (p_lens) {   free(p_lens);   p_lens=NULL;    }       //adding this in didn't get rid of the MPI_Finalize seg-fault
    if (p_disp) {   free(p_disp);   p_disp=NULL;    }
    if (myData) {   free(myData);   myData=NULL;    }
    if (allData){   free(allData);  allData=NULL;   }       //the if statement ensures that processes not allocating memory for this pointer don't free anything

    for(double temp=0.0;temp<1e6;++temp) temp += exp(-10.0);
    MPI_Barrier(MPI_COMM_WORLD);                            //purely for keeping the output organized by giving a time delay
    printf("ID %d: I have reached the end...before MPI_Type_free!\n", myid);

    // ====================== CLEAN UP ================================================================================
    ERRORCODE = MPI_Type_free(&myMPI_CD);           //this frees the data type...not always necessary, but a good practice

    for(double temp=0.0;temp<1e6;++temp) temp += exp(-10.0);
    MPI_Barrier(MPI_COMM_WORLD);                                //purely for keeping the output organized by giving a time delay

    if(ERRORCODE!=MPI_SUCCESS)  {   printf("ID %d...MPI_Type_free was not successful\n", myid); MPI_Abort(MPI_COMM_WORLD, 911); exit(EXIT_FAILURE); }
    else                        {   printf("ID %d...MPI_Type_free was successful, entering MPI_Finalize...\n", myid);       }
#endif
    ERRORCODE=MPI_Finalize();

    for(double temp=0.0;temp<1e7;++temp) temp += exp(-10.0);        //NO MPI_Barrier AFTER MPI_Finalize!

    if(ERRORCODE!=MPI_SUCCESS)  {   printf("ID %d...MPI_Finalize was not successful\n", myid);  MPI_Abort(MPI_COMM_WORLD, 911); exit(EXIT_FAILURE); }
    else                        {   printf("ID %d...MPI_Finalize was successful\n", myid);      }

    return EXIT_SUCCESS;
}

2 Answers

Stack Overflow user

Accepted answer

Posted on 2012-05-02 23:45:06

The outer loop over k is bogus, but it is not technically wrong -- it is just useless.

The real problem is that the displacements you pass to MPI_GATHERV are wrong. If you run under valgrind, you will see something like this:

==28749== Invalid write of size 2
==28749==    at 0x4A086F4: memcpy (mc_replace_strmem.c:838)
==28749==    by 0x4C69614: unpack_predefined_data (datatype_unpack.h:41)
==28749==    by 0x4C6B336: ompi_generic_simple_unpack (datatype_unpack.c:418)
==28749==    by 0x4C7288F: ompi_convertor_unpack (convertor.c:314)
==28749==    by 0x8B295C7: mca_pml_ob1_recv_frag_callback_match (pml_ob1_recvfrag.c:216)
==28749==    by 0x935723C: mca_btl_sm_component_progress (btl_sm_component.c:426)
==28749==    by 0x51D4F79: opal_progress (opal_progress.c:207)
==28749==    by 0x8B225CA: opal_condition_wait (condition.h:99)
==28749==    by 0x8B22718: ompi_request_wait_completion (request.h:375)
==28749==    by 0x8B231E1: mca_pml_ob1_recv (pml_ob1_irecv.c:104)
==28749==    by 0x955E7A7: mca_coll_basic_gatherv_intra (coll_basic_gatherv.c:85)
==28749==    by 0x9F7CBFA: mca_coll_sync_gatherv (coll_sync_gatherv.c:46)
==28749==  Address 0x7b1d630 is not stack'd, malloc'd or (recently) free'd

This indicates that MPI_GATHERV somehow got bad information.

(Unfortunately, there are other valgrind warnings that come from libltdl inside Open MPI -- that is a bug in libltdl -- and another from PLPA, which is sadly also unavoidable because it deliberately does this, for reasons that are not interesting here.)

Looking at your displacement calculation, I see

    total_len += p_lens[j];                                                              

    if (j == 0) {                                                                        
        p_disp[j] = 0;                                                                   
    } else {                                                                             
        p_disp[j] = p_disp[j - 1] + p_lens[j] * MPI_CD_size;                             
    }                                                                                    

But MPI gather displacements are in units of the datatype, not bytes. So it really should be:

p_disp[j] = total_len;
total_len += p_lens[j];

Making this change made the MPI_GATHERV valgrind warning go away for me.

3 votes

Stack Overflow user

Posted on 2012-05-02 23:27:42

This outer loop on 'k' is bogus. Its body executes only for k == myid (which is a constant for each running process), and k is never referenced inside the loop (other than being compared to the effectively constant myid). Also, the line mylen = myid*2; is frowned upon; I suggest you change it to a constant.

 for (int k=0; k<numprocs; ++k) {

    if(myid==k) {

        //printf("\t ID %d has %d entries: { ", myid, mylen);

        for(int i=0; i<mylen; ++i) {

            myData[i]= (struct CD) {.int_ID=myid*(i+1), .dbl_ID=myid*(i+1)};            //fills data elements with simple pattern
            //printf("%d: (%d,%lg) ", i, myData[i].int_ID, myData[i].dbl_ID);
        }
        //printf("}\n");
    }
}

So (given that myid lies between 0 and numprocs), the whole silly construct can be reduced to:

for(int i=0; i<mylen; ++i) {
        myData[i].int_ID=myid*(i+1);
        myData[i].dbl_ID=myid*(i+1);  
        }
1 vote
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/10406438
