【Question Title】: C: MPI_Allgather produces an error
【Posted】: 2017-06-04 08:46:54
【Question Description】:

I first generate a random number on each processor. In a second step I want to send the generated number to every other processor. That is, after using MPI_Allgather, every processor should hold a list containing all of the generated random numbers:

#include <stdlib.h>
#include <time.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){

    int nameLen;
    char processorName[MPI_MAX_PROCESSOR_NAME];

    int myrank;           // Rank of processor
    int numprocs;         // Number of processes
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Get_processor_name(processorName,&nameLen);
    MPI_Status status;

    time_t t;
    srand((unsigned)time(NULL)+myrank*numprocs+nameLen);

    long c = rand()%100;

    printf("Processor %d has %li particles\n", myrank, c);

    long oldcount[numprocs];

    // Every processor gets the random number of the other processors
    MPI_Allgather(&c, 1, MPI_LONG, &oldcount, numprocs, MPI_LONG, MPI_COMM_WORLD);

    for(int i=0; i<numprocs; i++){
         printf("Processor %d: %d entry of list is %li\n", myrank, i, oldcount[i]);
    }

    MPI_Finalize();
    return 0;
}

This code produces an error. But why? I think I am using MPI_Allgather correctly:

MPI_Allgather(
    void* send_data,
    int send_count,
    MPI_Datatype send_datatype,
    void* recv_data,
    int recv_count,
    MPI_Datatype recv_datatype,
    MPI_Comm communicator)

【Comments】:

  • Which error does the code produce?

Tags: c mpi


【Solution 1】:

The problem is the recv_count argument of MPI_Allgather. The MPI specification says it is the "number of elements received from any process", i.e. the per-process count, not the total. You are passing the total number of elements. Try

MPI_Allgather(&c, 1, MPI_LONG, &oldcount, 1, MPI_LONG, MPI_COMM_WORLD);

【Discussion】:
