Recursive function using the MPI library

I am using a recursive function to find the determinant of a 5x5 matrix. This sounds like a trivial problem, and for large dimensions it could be solved with OpenMP, but I am trying to use MPI. However, I can't understand how to deal with the recursion that accumulates the results.

So my question is: how do I use MPI for this?

PS: The matrix is a Hilbert matrix, so the determinant should be very close to 0.

I have written the code below, but I think it simply does the same work n times rather than dividing the problem and accumulating the result.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define ROWS 5

double getDeterminant(double matrix[ROWS][ROWS], int iDim);
double matrix[ROWS][ROWS] = {
    {1.0, -0.5, -0.33, -0.25, -0.2},
    {-0.5, 0.33, -0.25, -0.2, -0.167},
    {-0.33, -0.25, 0.2, -0.167, -0.1428},
    {-0.25, -0.2, -0.167, 0.1428, -0.125},
    {-0.2, -0.167, -0.1428, -0.125, 0.111},
};

int rank, size, tag = 0;

int main(int argc, char** argv)
{
    //Status of messages for each individual rows
    MPI_Status status[ROWS];  

    //Message ID or Rank
    MPI_Request req[ROWS];  

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double result;
    result = getDeterminant(matrix, ROWS);
    printf("The determinant is %lf\n", result);

    //Set barrier to wait for all processes
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();

    return 0;
}

double getDeterminant(double matrix[ROWS][ROWS], int iDim)
{
    int iCols, iMinorRow, iMinorCol, iTemp, iSign;
    double c[5];
    double tempMat[5][5];

    double dDet;
    dDet = 0;
    if (iDim == 2)
    {
        dDet = (matrix[0][0] * matrix[1][1]) - (matrix[0][1] * matrix[1][0]);
        return dDet;
    }
    else
    {
        for (iCols = 0; iCols < iDim; iCols++)
        {
            int temp_row = 0, temp_col = 0;
            for (iMinorRow = 0; iMinorRow < iDim; iMinorRow++)
            {
                for (iMinorCol = 0; iMinorCol < iDim; iMinorCol++)
                {
                    if (iMinorRow != 0 && iMinorCol != iCols)
                    {
                        tempMat[temp_row][temp_col] = matrix[iMinorRow][iMinorCol];
                        temp_col++;
                        if (temp_col >= iDim - 1)
                        {
                            temp_row++;
                            temp_col = 0;
                        }
                    }
                }
            }
            //Handling the alternating signs while calculating the determinant
            for (iTemp = 0, iSign = 1; iTemp < iCols; iTemp++)
            {
                iSign = (-1) * iSign;
            }
            //Recursively evaluate the determinant of the minor (base case is 2x2)
            c[iCols] = iSign * getDeterminant(tempMat, iDim - 1);
        }
        for (iCols = 0, dDet = 0.0; iCols < iDim; iCols++)
        {
            dDet = dDet + (matrix[0][iCols] * c[iCols]);
        }
        return dDet;
    }
}

The expected result is a very small value close to 0. I do get that result, but without actually using MPI.



Solution 1:[1]

The provided program is executed in n processes: mpirun launches n processes, and all of them execute the same code. That is the expected behaviour. Unlike OpenMP, MPI is not a shared-memory programming model but a distributed-memory one; it uses message passing to communicate between processes. There are no global variables shared across MPI processes — all the data in your program is local to each process. If you need to share data between processes, you have to send it explicitly. You can use collective operations like MPI_Bcast to send it to all processes, or point-to-point operations like MPI_Send to send it to a specific process.
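For example, a matrix that has been filled in only on rank 0 can be copied to every rank with a single collective call. A minimal fragment (assuming the default MPI_COMM_WORLD communicator):

```c
double matrix[5][5];   /* filled in on rank 0 only */

/* After this call, every rank holds the same 25 doubles.
   All ranks must call MPI_Bcast; the root argument (0) names the sender. */
MPI_Bcast(&matrix[0][0], 5 * 5, MPI_DOUBLE, 0, MPI_COMM_WORLD);
```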

For your application to behave as expected, you have to tailor it for MPI (unlike OpenMP, where you can use pragmas). Every process has an identifier, or rank. Typically, rank 0 (let's call it the main process) distributes the data to the other processes using MPI_Send (or another method), and the remaining processes receive it using MPI_Recv (the matching call for MPI_Send). After receiving its share of the data from the main process, each process performs some computation on it and then sends its result back to the main process, which aggregates the results. This is the most basic MPI workflow; more advanced features such as MPI I/O are also available.
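Applied to the determinant question, one common pattern is: rank 0 broadcasts the matrix, each rank computes the cofactor terms for a disjoint subset of the top-row columns, and MPI_Reduce sums the partial results on rank 0. The recursion inside each rank stays serial; MPI parallelizes only the top-level expansion. A minimal sketch of that idea (note it initializes a true Hilbert matrix, entries 1/(i+j+1), rather than the values from the question):

```c
#include <mpi.h>
#include <stdio.h>

#define N 5

/* Serial cofactor-expansion determinant, same idea as in the question. */
static double det(double m[N][N], int dim)
{
    if (dim == 1)
        return m[0][0];
    double d = 0.0, sign = 1.0;
    for (int col = 0; col < dim; col++) {
        double sub[N][N];
        for (int i = 1; i < dim; i++) {     /* build the (0,col) minor */
            int c = 0;
            for (int j = 0; j < dim; j++)
                if (j != col)
                    sub[i - 1][c++] = m[i][j];
        }
        d += sign * m[0][col] * det(sub, dim - 1);
        sign = -sign;
    }
    return d;
}

int main(int argc, char **argv)
{
    double matrix[N][N];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)                           /* rank 0 owns the data */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                matrix[i][j] = 1.0 / (i + j + 1);   /* Hilbert matrix */

    /* share the matrix with every rank */
    MPI_Bcast(&matrix[0][0], N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* each rank expands along a disjoint subset of columns of row 0 */
    double local = 0.0;
    for (int col = rank; col < N; col += size) {
        double minor[N][N];
        int r = 0;
        for (int i = 1; i < N; i++, r++) {
            int c = 0;
            for (int j = 0; j < N; j++)
                if (j != col)
                    minor[r][c++] = matrix[i][j];
        }
        double sign = (col % 2 == 0) ? 1.0 : -1.0;
        local += sign * matrix[0][col] * det(minor, N - 1);
    }

    /* sum the partial cofactor sums on rank 0 */
    double total;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("determinant = %g\n", total);  /* tiny but nonzero */

    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and run with, say, mpirun -np 5; with fewer ranks each one simply handles more columns (col += size), so the program is correct for any process count from 1 to 5.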

MPI does not do anything by itself for synchronization or data sharing. It just launches n instances of the application and provides the required communication routines. The application developer is in charge of communication (data distribution, data structures, etc.) and synchronization (e.g. using MPI_Barrier) among processes.

Following is a simple send/receive program using MPI. When you run it with n as 2, two copies of the program are launched. Using MPI_Comm_rank(), each process obtains its rank, which can be used to control the flow of the code. In the code below, the process with rank 0 sends the variable number using MPI_Send, and the process with rank 1 receives it using MPI_Recv. The if and else if branches differentiate the processes so that one sends and the other receives. This is a very basic MPI program that shares data between processes.

// Find out rank, size
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);

int number;
if (world_rank == 0) {
    number = -1;
    MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
} else if (world_rank == 1) {
    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    printf("Process 1 received number %d from process 0\n",
           number);
}

Here is a tutorial on MPI.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
