【发布时间】:2014-04-21 08:35:52
【问题描述】:
我正在尝试生成两个大小为 n 的矩阵 A&B,将它们划分为 s*s 子矩阵,并在将它们分散到处理器之后,在块矩阵之间执行乘法运算。我已经能够通过处理器成功生成和分散子矩阵;但是,我被困在每个处理器的子矩阵上执行乘法。我的代码与以下帖子中的代码(答案部分中的代码)非常相似,但我针对两个矩阵对其进行了修改: MPI partition matrix into blocks
您能告诉我如何修改它以执行乘法吗?
为了便于跟进,我一直使用相同的标签。
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <time.h>
#define COLSa 10
#define ROWSa 10
#define COLSb 10
#define ROWSb 10
#define s 2
int main(int argc, char **argv) {
MPI_Init(&argc, &argv);
int p, rank;
MPI_Comm_size(MPI_COMM_WORLD, &p);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
char i;
char j;
char a[ROWSa*COLSa];
char b[ROWSb*COLSb];
char c[ROWSa*COLSb]; // c=a*b
const int NPROWS=s; /* number of rows in _decomposition_ */
const int NPCOLS=s; /* number of cols in _decomposition_ */
const int BLOCKROWSa = ROWSa/NPROWS; /* number of rows in _block_ */
const int BLOCKCOLSa = COLSa/NPCOLS; /* number of cols in _block_ */
const int BLOCKROWSb = ROWSb/NPROWS; /* number of rows in _block_ */
const int BLOCKCOLSb= COLSb/NPCOLS; /* number of cols in _block_ */
if (rank == 0) {
for (int ii=0; ii<ROWSa*COLSa; ii++) {
a[ii]=rand() %10 ;
}
for (int ii=0; ii<ROWSb*COLSb; ii++) {
b[ii]=rand() %10 ;
}
}
char BLa[BLOCKROWSa*BLOCKCOLSa];
for (int ii=0; ii<BLOCKROWSa*BLOCKCOLSa; ii++)
BLa[ii] = 0;
char BLb[BLOCKROWSb*BLOCKCOLSb];
for (int ii=0; ii<BLOCKROWSb*BLOCKCOLSb; ii++)
BLb[ii] = 0;
char BLc[BLOCKROWSa*BLOCKCOLSb];
for (int ii=0; ii<BLOCKROWSa*BLOCKCOLSb; ii++)
BLc[ii] = 0;
MPI_Datatype blocktype;
MPI_Datatype blocktype2;
MPI_Type_vector(BLOCKROWSa, BLOCKCOLSa, COLSa, MPI_CHAR, &blocktype2);
MPI_Type_vector(BLOCKROWSb, BLOCKCOLSb, COLSb, MPI_CHAR, &blocktype2);
MPI_Type_create_resized( blocktype2, 0, sizeof(char), &blocktype);
MPI_Type_commit(&blocktype);
int dispsa[NPROWS*NPCOLS];
int countsa[NPROWS*NPCOLS];
int dispsb[NPROWS*NPCOLS];
int countsb[NPROWS*NPCOLS];
//*******************************Start Time Record****************//
clock_t t;
t=clock();
for (int ii=0; ii<NPROWS; ii++) {
for (int jj=0; jj<NPCOLS; jj++) {
dispsa[ii*NPCOLS+jj] = ii*COLSa*BLOCKROWSa+jj*BLOCKCOLSa;
countsa [ii*NPCOLS+jj] = 1;
}
}
MPI_Scatterv(a, countsa, dispsa, blocktype, BLa, BLOCKROWSa*BLOCKCOLSa, MPI_CHAR, 0, MPI_COMM_WORLD);
for (int ii=0; ii<NPROWS; ii++) {
for (int jj=0; jj<NPCOLS; jj++) {
dispsb[ii*NPCOLS+jj] = ii*COLSb*BLOCKROWSb+jj*BLOCKCOLSb;
countsb [ii*NPCOLS+jj] = 1;
}
}
MPI_Scatterv(b, countsb, dispsb, blocktype, BLb, BLOCKROWSb*BLOCKCOLSb, MPI_CHAR, 0, MPI_COMM_WORLD);
for (int proc=0; proc<p; proc++) {
if (proc == rank) {
printf("Rank = %d\n", rank);
if (rank == 0) {
printf("Global matrix A : \n");
for (int ii=0; ii<ROWSa; ii++) {
for (int jj=0; jj<COLSa; jj++) {
printf("%3d ",(int)a[ii*COLSa+jj]);
}
printf("\n");
}
printf("\n");
printf("Global matrix B : \n");
for (int ii=0; ii<ROWSb; ii++) {
for (int jj=0; jj<COLSb; jj++) {
printf("%3d ",(int)b[ii*COLSb+jj]);
}
printf("\n");
}
printf("\n");
printf("Local Matrix A:\n");
for (int ii=0; ii<BLOCKROWSa; ii++) {
for (int jj=0; jj<BLOCKCOLSa; jj++) {
printf("%3d ",(int)BLa[ii*BLOCKCOLSa+jj]);
}
printf("\n");
}
printf("\n");
printf("Local Matrix B:\n");
for (int ii=0; ii<BLOCKROWSb; ii++) {
for (int jj=0; jj<BLOCKCOLSb; jj++) {
printf("%3d ",(int)BLb[ii*BLOCKCOLSb+jj]);
}
printf("\n");
}
}
printf("Local Matrix A:\n");
for (int ii=0; ii<BLOCKROWSa; ii++) {
for (int jj=0; jj<BLOCKCOLSa; jj++) {
printf("%3d ",(int)BLa[ii*BLOCKCOLSa+jj]);
}
printf("\n");
}
printf("Local Matrix B:\n");
for (int ii=0; ii<BLOCKROWSb; ii++) {
for (int jj=0; jj<BLOCKCOLSb; jj++) {
printf("%3d ",(int)BLb[ii*BLOCKCOLSb+jj]);
}
printf("\n");
}
//**********************Multiplication***********************//
for (int i = 0; i < BLOCKROWSa; i++) {
for (j = 0; j < BLOCKCOLSb; j++) {
for (k = 0; k < BLOCKCOLSb; k++) { //I am considering square matrices with the same sizes
BLc[i + j*BLOCKROWSa] += BLa[i + k*BLOCKROWSa]*BLb[k + BLOCKCOLb*j];
printf("%3d ",(int)BLc[i+j*BLOCKROWSa]);
}
printf("\n");
}
printf("\n");
}
}
MPI_Barrier(MPI_COMM_WORLD);
}
MPI_Finalize();
//**********************End Time Record************************//
t=clock()-t;
printf("It took %f seconds (%d clicks).\n",t,((float)t)/CLOCKS_PER_SEC);
return 0;
}
【问题讨论】:
-
除了缺少
int k;和BLc[i + j*BLOCKROWSa] += BLa[i + k*BLOCKROWSa]*BLb[k + BLOCKCOLb*j];变成BLc[i + j*BLOCKROWSa] += BLa[i + k*BLOCKROWSa]*BLb[k + BLOCKCOLSb*j];(一个S更多),只要您希望执行乘法运算,您的代码没有什么特别奇怪的地方块矩阵之间。为什么你认为你被困住了?为什么你对你的代码不满意?它与mpicc main.c -o main -std=c99和mpirun -np 4 main一起使用。 -
嗨弗朗西斯。感谢您的评论和更正。但是,通过这段代码,由于每个处理器的乘法结果,我无法得到一个单一的矩阵,出于某种原因,我在每个处理器上得到了 5 个!
-
好的,我设法修复了那个部分,现在我得到了一个单一的产品结果。但是乘法是不正确的!数学有问题!
-
一个小补充,与您的问题无关。您正在使用 rand(),但尚未使用 srand 启动种子。所以每次运行你实际上都在使用相同的矩阵。您可以添加一个 ''srand(time(NULL)) '' 来修复那个。
标签: matrix mpi matrix-multiplication