3 Replies - 171 Views - Last Post: 06 December 2017 - 07:27 AM Rate Topic: -----

#1 lanaa
Communication time and computation time in MPI

Posted 05 December 2017 - 11:30 AM

In this code I am trying to measure the communication time and the computation time of the whole program. The communication time works fine, but there is an error in the computation time, or I do not know how to compute it correctly.

The problem is that `totalTime`, when printed, equals zero.
I use `MPI_Reduce` to sum the individual times of all processes, but the result is zero.

Can anyone help me, please, to compute the computation time over all processes in a correct way?
#include <stdio.h>
#include "mpi.h"

#define N 3   /* number of rows and columns in matrix */

MPI_Status status;
double a[N][N], b[N][N], c[N][N];

void print_matrix(void)
{
    int i, j;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            printf("%7.2f", c[i][j]);
        }
        printf("\n");
    }
}

int main(int argc, char **argv)
{
    int numtasks, taskid, numworkers, source, dest, rows, offset, i, j, k;
    double start, finish, comm_time, comp_time, totalTime;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    numworkers = numtasks - 1;

    /*---------------------------- master ----------------------------*/
    if (taskid == 0) {
        /* initialization */
        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++) {
                a[i][j] = 1;
                b[i][j] = 1;
            }
        }

        start = MPI_Wtime();

        /* send matrix data to the worker tasks
           (the master-to-master transfer is skipped on purpose) */
        rows = N / numworkers;
        offset = 0;
        for (dest = 1; dest <= numworkers; dest++) {
            MPI_Send(&offset, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);
            MPI_Send(&rows, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);
            MPI_Send(&a[offset][0], rows * N, MPI_DOUBLE, dest, 1, MPI_COMM_WORLD);
            MPI_Send(&b, N * N, MPI_DOUBLE, dest, 1, MPI_COMM_WORLD);
            offset = offset + rows;
        }

        /* wait for results from all worker tasks */
        for (i = 1; i <= numworkers; i++) {
            source = i;
            MPI_Recv(&offset, 1, MPI_INT, source, 2, MPI_COMM_WORLD, &status);
            MPI_Recv(&rows, 1, MPI_INT, source, 2, MPI_COMM_WORLD, &status);
            MPI_Recv(&c[offset][0], rows * N, MPI_DOUBLE, source, 2,
                     MPI_COMM_WORLD, &status);
        }

        finish = MPI_Wtime();
        comm_time = finish - start;
        printf("Communication time done in %f seconds.\n", comm_time);
        /* printf("Computation time in worker nodes done in %f seconds.\n",
                  comp_time); */

        print_matrix();
    }

    /*---------------------------- worker ----------------------------*/
    if (taskid > 0) {
        source = 0;
        MPI_Recv(&offset, 1, MPI_INT, source, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(&rows, 1, MPI_INT, source, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(&a, rows * N, MPI_DOUBLE, source, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(&b, N * N, MPI_DOUBLE, source, 1, MPI_COMM_WORLD, &status);

        /* matrix multiplication */
        start = MPI_Wtime();
        for (k = 0; k < N; k++)
            for (i = 0; i < rows; i++)
                for (j = 0; j < N; j++)
                    c[i][k] = c[i][k] + a[i][j] * b[j][k];
        comp_time = MPI_Wtime() - start;
        printf("task id = %d computation time %f seconds.\n", taskid, comp_time);

        MPI_Send(&offset, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);
        MPI_Send(&rows, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);
        MPI_Send(&c, rows * N, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD);
    }

    /* every rank, the master included, takes part in the reduction */
    MPI_Reduce(&comp_time, &totalTime, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (taskid == 0) {
        printf("Total time spent in seconds is %f\n", totalTime);
    }

    MPI_Finalize();
    return 0;
}


This post has been edited by Skydiver: 05 December 2017 - 11:38 AM
Reason for edit:: Fixed code tags. Please learn to use code tags.



Replies To: Communication time and computation time in MPI

#2 snoopy11

Re: Communication time and computation time in MPI

Posted 05 December 2017 - 02:34 PM

Have you tried using

#define MPI_WTIME_IS_GLOBAL TRUE


at the top of your code?

#3 lanaa

Re: Communication time and computation time in MPI

Posted 06 December 2017 - 03:40 AM

snoopy11, on 05 December 2017 - 02:34 PM, said:

Have you tried using

#define MPI_WTIME_IS_GLOBAL TRUE


at the top of your code?


Do you mean `totalTime`?
Can you explain more, please?

#4 snoopy11

Re: Communication time and computation time in MPI

Posted 06 December 2017 - 07:27 AM

No, I meant MPI_Wtime(), as in start = MPI_Wtime();

You use the define at the start of your code, and it makes MPI_Wtime global and shared across all processors.
