Webinar 2016 Introduction to MPI - Part III

(NOTE: This seminar is a continuation of the Part I and Part II seminars given on Nov. 11 and Nov. 25, 2015. If you missed them, the recordings are posted on SHARCNET’s YouTube channel.)

This seminar will build on the Introduction to MPI (Message Passing Interface) Part I and Part II seminars, introducing more advanced features of MPI that make it possible to write more efficient parallel programs.

The seminar will cover derived datatypes, which allow several individual data items to be grouped into a single message. Since sending a message always carries overhead, reducing the number of messages is highly beneficial. Derived datatypes can also make code simpler, allowing complex data communication patterns to be implemented in fewer lines of code.
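As a taste of what this looks like in practice, the following is a minimal sketch (not taken from the seminar material) of one common approach: describing the layout of a C struct with MPI_Type_create_struct so that all of its fields travel in a single send instead of several. The Particle struct and field choices here are purely illustrative.

<pre>
/* Minimal sketch: send a whole struct in one message using a derived datatype. */
#include <stddef.h>
#include <mpi.h>

typedef struct {
    int    id;
    double position[3];
    double mass;
} Particle;

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Describe the struct layout: block lengths, byte offsets, and the
     * MPI type of each field. */
    int          blocklens[3] = {1, 3, 1};
    MPI_Aint     displs[3]    = {offsetof(Particle, id),
                                 offsetof(Particle, position),
                                 offsetof(Particle, mass)};
    MPI_Datatype types[3]     = {MPI_INT, MPI_DOUBLE, MPI_DOUBLE};

    MPI_Datatype particle_type;
    MPI_Type_create_struct(3, blocklens, displs, types, &particle_type);
    MPI_Type_commit(&particle_type);

    Particle p = {0};
    if (rank == 0) {
        p.id = 42; p.mass = 1.5;
        /* All three fields go out in a single message. */
        MPI_Send(&p, 1, particle_type, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&p, 1, particle_type, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}
</pre>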

The talk will also cover user-defined communicators and topologies. A communicator is a collection of processes that can send messages to each other, and a topology is a structure imposed on the processes in a communicator that allows the processes to be addressed in different ways. Creating a communicator that groups a subset of processes allows collective communications restricted to that subset, which is highly useful for some problems.
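Again as an illustrative sketch (not from the seminar itself), one standard way to create such a sub-communicator is MPI_Comm_split; the even/odd grouping below is an arbitrary example, chosen only to show a collective restricted to each subset.

<pre>
/* Minimal sketch: split MPI_COMM_WORLD into two groups and run a
 * collective on each sub-communicator. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Processes with the same "color" end up in the same sub-communicator;
     * here even and odd world ranks form two groups. */
    int color = world_rank % 2;
    MPI_Comm sub_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    int sub_rank, sub_size;
    MPI_Comm_rank(sub_comm, &sub_rank);
    MPI_Comm_size(sub_comm, &sub_size);

    /* This reduction involves only the processes in sub_comm. */
    int sum = 0;
    MPI_Allreduce(&world_rank, &sum, 1, MPI_INT, MPI_SUM, sub_comm);

    printf("world rank %d -> group %d, rank %d of %d, group sum = %d\n",
           world_rank, color, sub_rank, sub_size, sum);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}
</pre>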

The example programs in this talk will be implemented in C.