【Posted】: 2017-07-17 09:00:53
【Problem Description】:
I am trying to print a dynamically allocated 2D array from my master process, after it has received all of its components from all the other processes. By components I mean subarrays, or blocks.
I have made the code generic with respect to the number of processes. The diagram below will help you see how the blocks are arranged in the complete array. Each block is handled by one process. Just for here, let's assume I run the program with 12 processes (natively I have 8 cores), using the command:
mpiexec -n 12 ./gather2dArray
Here is the layout for the 12-process scenario: a 3 × 4 grid of blocks, each block having 4 rows × 6 columns.
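To make the block placement concrete, here is a minimal sketch of the rank-to-block mapping (rankToBlock is a hypothetical helper; it assumes ranks are placed row-major across the process grid, which is also what the displacement loop in the code below assumes):
// Minimal sketch: assumed row-major mapping from MPI rank to block
// coordinates in a prows x pcolumns (here 3 x 4) grid of blocks.
void rankToBlock(int rank, int pcolumns, int* p_row, int* p_column)
{
    *p_row    = rank / pcolumns;   // which row of blocks    (0..prows-1)
    *p_column = rank % pcolumns;   // which column of blocks (0..pcolumns-1)
}
// e.g. with pcolumns = 4: rank 0 -> (0,0), rank 5 -> (1,1), rank 11 -> (2,3)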
Jonathan's answer to this question helped me a lot, but unfortunately I have not been able to fully implement what I want.
I first create the blocks in every process; I name them grid. Every array is a dynamically allocated 2D array. I also create the global array (universe), which is visible only to the master process (#0).
Finally I have to use MPI_Gatherv(...) to assemble all the subarrays into the global array. Then I proceed to display the local arrays as well as the global one.
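For reference, this is the MPI-3 prototype of MPI_Gatherv; the receive-side arguments (recvbuf, recvcounts, displs, recvtype) are significant only at the root:
int MPI_Gatherv(const void* sendbuf, int sendcount, MPI_Datatype sendtype,
                void* recvbuf, const int recvcounts[], const int displs[],
                MPI_Datatype recvtype, int root, MPI_Comm comm);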
When I run the program with the command above, I get a segmentation fault when execution reaches the MPI_Gatherv(...) call. I can't figure out what I'm doing wrong. The complete code (heavily commented) follows:
EDIT
I have fixed some bugs in the code. Now MPI_Gatherv() is somewhat successful: I am able to print the entire first row of the global array correctly (I check the individual elements of the processes and they always match). But when it reaches the second row, some hieroglyphics appear and, finally, a segmentation fault. I have not been able to figure out what goes wrong there. Still looking into it..
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <time.h>
void print2dCharArray(char** array, int rows, int columns);
int main(int argc, char** argv)
{
    int master = 0, np, rank;
    char version[10];        // (used for the version/processor-name printout seen in the output,
    char processorName[20];  //  elided from this listing)
    int strLen[10];

    // Initialize MPI environment
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    if (np != 12) { MPI_Abort(MPI_COMM_WORLD, 1); }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // We need a different seed for each process
    srand(time(0) ^ (rank * 33 / 4));

    int nDims = 2;                // array dimensions
    int rows = 4, columns = 6;    // rows and columns of each block
    int prows = 3, pcolumns = 4;  // rows and columns of blocks; each block is handled by 1 process
    double density = 0.5;         // probability of a '#' cell (this declaration was missing from
                                  // the listing; 0.5 is an assumed value)

    char** grid = malloc(rows * sizeof(char*));
    for (int i = 0; i < rows; i++)
        grid[i] = malloc(columns * sizeof(char));

    char** universe = NULL;           // Global array
    char* recvPtr;                    // Pointer to start of global array
    int Rows = rows * prows;          // Global array rows
    int Columns = columns * pcolumns; // Global array columns
    int sizes[2];                     // No. of elements in each dimension of the whole array
    int subSizes[2];                  // No. of elements in each dimension of the subarray
    int startCoords[2];               // Starting coordinates of each subarray
    MPI_Datatype recvBlock, recvMagicBlock;
    if (rank == master){  // For the master's eyes only
        universe = malloc(Rows * sizeof(char*));
        for (int i = 0; i < Rows; i++)
            universe[i] = malloc(Columns * sizeof(char));

        // Create a subarray (a rectangular block) datatype from a regular 2D array
        sizes[0]       = Rows;
        sizes[1]       = Columns;
        subSizes[0]    = rows;
        subSizes[1]    = columns;
        startCoords[0] = 0;
        startCoords[1] = 0;
        MPI_Type_create_subarray(nDims, sizes, subSizes, startCoords, MPI_ORDER_C, MPI_CHAR, &recvBlock);
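        // With the values above, this type picks out the 4x6 block in the
        // top-left corner of a 12x24 array. Its extent, however, is that of
        // the full 12x24 array (288 chars), which is why it is resized below.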
        // Now modify the newly created datatype to fit our needs by resizing it:
        //  - the lower bound stays the same (0)
        //  - the new extent becomes columns * sizeof(char), so the "next" block
        //    begins as soon as we reach the region of elements occupied by it
        MPI_Type_create_resized(recvBlock, 0, columns * sizeof(char), &recvMagicBlock);
        MPI_Type_commit(&recvMagicBlock);

        recvPtr = &universe[0][0];
    }
    // Populate the local array
    for (int y = 0; y < rows; y++){
        for (int x = 0; x < columns; x++){
            if (( (double) rand() / RAND_MAX) <= density)
                grid[y][x] = '#';
            else
                grid[y][x] = '.';
        }
    }

    // Display the local arrays, one process at a time
    for (int i = 0; i < np; i++){
        if (i == rank) {
            printf("\n[Rank] of [total]: No%d of %d\n", rank, np);
            print2dCharArray(grid, rows, columns);
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }
    /* MPI gathering */
    int recvCounts[np], displacements[np];

    // recvCounts: how many chunks of data each process sends -- in units of blocks here --
    for (int i = 0; i < np; i++)
        recvCounts[i] = 1;
    // prows * pcolumns == np

    // displacements: displacement relative to the global buffer (universe) at which to place
    // the incoming data block from process i -- in block extents! --
    int index = 0;
    for (int p_row = 0; p_row < prows; p_row++)
        for (int p_column = 0; p_column < pcolumns; p_column++)
            displacements[index++] = p_column + p_row * (rows * pcolumns);
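    // Worked example for the 12-process case (prows = 3, pcolumns = 4,
    // rows * pcolumns = 16), in units of the resized block extent:
    //   displacements = { 0,  1,  2,  3,     <- first  row of blocks
    //                    16, 17, 18, 19,     <- second row of blocks
    //                    32, 33, 34, 35 }    <- third  row of blocks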
    // MPI_Gatherv(...) is a collective routine:
    // gather the local arrays into the global array on the master process.
    // send type: MPI_CHAR       (a single char)
    // recv type: recvMagicBlock (a whole block)
    MPI_Gatherv(&grid[0][0], rows * columns, MPI_CHAR,                      // sender-side parameters
                recvPtr, recvCounts, displacements, recvMagicBlock, master, // receiver-side parameters
                MPI_COMM_WORLD);
    // Display the global array
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == master){
        printf("\n---Global Array---\n");
        print2dCharArray(universe, Rows, Columns);
    }

    MPI_Finalize();
    return 0;
}
void print2dCharArray(char** array, int rows, int columns)
{
    int i, j;
    for (i = 0; i < rows; i++){
        for (j = 0; j < columns; j++){
            printf("%c ", array[i][j]);
        }
        printf("\n");
    }
    fflush(stdout);
}
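For comparison, here is the contiguous-allocation pattern that, as far as I understand, the linked answer relies on: all rows live in one malloc'd slab, so &array[0][0] really addresses rows * columns consecutive chars. This is a hypothetical helper (it only needs <stdlib.h>), not something the code above uses:
// Minimal sketch: allocate a rows x columns char array whose data is one
// contiguous block of memory. (Assumed pattern, based on the linked answer.)
char** alloc2dCharArray(int rows, int columns)
{
    char*  data  = malloc(rows * columns * sizeof(char)); // one contiguous slab
    char** array = malloc(rows * sizeof(char*));
    for (int i = 0; i < rows; i++)
        array[i] = &data[i * columns];                    // row pointers into the slab
    return array;
}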
Below is the output I get. No matter what I try, I can't get past this. As you can see, the first row of the global array is printed correctly, from the first 4 blocks of the first 4 processes. When jumping to the next row we get hieroglyphics..
hostname@User:~/mpi$ mpiexec -n 12 ./gather2darray
MPICH Version: 3User
Processor name: User
[Rank] of [total]: No0 of 12
. . # . . #
# . # # # .
. . . # # .
. . # . . .
[Rank] of [total]: No1 of 12
. . # # . .
. . . . # #
. # . . # .
. . # . . .
[Rank] of [total]: No2 of 12
. # # # . #
. # . . . .
# # # . . .
. . . # # .
[Rank] of [total]: No3 of 12
. . # # # #
. . # # . .
# . # . # .
. . . # . .
[Rank] of [total]: No4 of 12
. # . . . #
# . # . # .
# . . . . .
# . . . . .
[Rank] of [total]: No5 of 12
# # . # # .
# . . # # .
. . . . # .
. # # . . .
[Rank] of [total]: No6 of 12
. . # # . #
. . # . # .
# . . . . .
. . . # # #
[Rank] of [total]: No7 of 12
# # . # # .
. # # . . .
. . . . . #
. . . # # .
[Rank] of [total]: No8 of 12
. # . . . .
# . # . # .
. . . # . #
# . # # # .
[Rank] of [total]: No9 of 12
. . . . . #
. . # . . .
. . # . . #
. . # # . .
[Rank] of [total]: No10 of 12
. . . . # .
# . . . . .
. . # # . .
. . . # . #
[Rank] of [total]: No11 of 12
. # . . # .
. # . # # .
. . . # . .
. # . # . #
---Global Array---
. . # . . # . . # # . . . # # # . # . . # # # #
� � < * � � e { � � � � � �
J
*** Error in `./gather2darray': double free or corruption (out): 0x0000000001e4c050 ***
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
*** stack smashing detected ***: ./gather2darray terminated
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 10979 RUNNING AT User
= EXIT CODE: 139
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
Any help would be greatly appreciated. Thanks in advance.
Tags: multidimensional-array parallel-processing segmentation-fault mpi dynamic-arrays