Webinar 2015 Introduction to MPI

Introduction to MPI – Part I

Have you discovered that you need to learn how to write parallel codes using the Message Passing Interface (MPI) for your research? This talk aims to get you started, and no prior MPI knowledge or experience is needed! MPI programs can be written in C, C++, or Fortran, so prior programming experience in at least one of these languages is needed. Since there is a one-to-one mapping between the C and Fortran MPI calls, the code examples in the presentation will use C and/or C++ to keep things straightforward; equivalent C, C++, and Fortran programs for all examples will be made available after the talk. This talk will cover: (i) an overview of what MPI is and what it enables one to do, (ii) a simple “Hello World!” program, (iii) how to compile and run MPI programs, (iv) an overview of some of the basic MPI communication API calls, and (v) how to think about and organize MPI programs so that work runs in parallel, illustrated with sample programs. (NOTE: A Part II talk, the Nov. 25, 2015 SHARCNET General Interest webinar, will continue where this talk ends.)
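
For reference, a minimal sketch along the lines of the Part I topics is given below. It is not the webinar's own code; the file name, process count, and message pattern are only illustrative. It shows MPI start-up and shutdown, a “Hello World!” message, and a basic point-to-point exchange with MPI_Send and MPI_Recv. Typical compile and run commands appear as comments, though the exact wrapper names can vary between clusters.

/* hello_mpi.c -- a minimal sketch (not the webinar's own code) showing
 * MPI start-up, a "Hello World!" message, and a basic point-to-point
 * exchange with MPI_Send/MPI_Recv.
 *
 * Typical compile/run commands (names may differ between clusters):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 4 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI environment    */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id, 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

    printf("Hello World! from process %d of %d\n", rank, size);

    if (rank != 0) {
        /* every worker sends its rank to process 0 */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int i, value;
        for (i = 1; i < size; i++) {
            /* process 0 receives one message from each worker */
            MPI_Recv(&value, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Process 0 received %d from process %d\n", value, i);
        }
    }

    MPI_Finalize();                         /* clean shutdown               */
    return 0;
}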

Introduction to MPI – Part II

This talk will build on the Introduction to MPI (Message Passing Interface) Part I talk, introducing more advanced features such as collective and non-blocking communications. Collective communications are implemented in a set of standard MPI routines; when communication follows a standard, structured pattern, they let processes exchange information efficiently without extra effort from the programmer. Examples of collective communications include broadcasts and reductions. Non-blocking communications allow communication to be overlapped with computation. Since communication is generally slow compared to computation, such overlap is often necessary to produce efficient MPI code. The example programs in this talk will be implemented in C.
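
The sketch below (again not the webinar's own code; the values and the ring pattern are only illustrative) touches on the Part II topics: a broadcast with MPI_Bcast, a reduction with MPI_Reduce, and a non-blocking ring exchange with MPI_Isend/MPI_Irecv completed by MPI_Waitall, leaving room for local computation while the messages are in flight.

/* collectives_nonblocking.c -- a hedged sketch (not the webinar's own code)
 * of the Part II topics: a broadcast, a reduction, and a non-blocking
 * exchange overlapped with local work.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Collective: process 0 broadcasts a value to every process. */
    int n = (rank == 0) ? 100 : 0;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Collective: sum every process's rank onto process 0. */
    int sum = 0;
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("broadcast value = %d, sum of ranks = %d\n", n, sum);

    /* Non-blocking: exchange ranks with the neighbours in a ring, then
     * do local work while the messages are in flight. */
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    int recvd;
    MPI_Request reqs[2];

    MPI_Irecv(&recvd, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&rank,  1, MPI_INT, next, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... computation that does not need 'recvd' could go here ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* complete both operations */
    printf("process %d received %d from process %d\n", rank, recvd, prev);

    MPI_Finalize();
    return 0;
}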