Webinar 2016 Hybrid MPI and OpenMP Parallel Programming


Latest revision as of 13:05, 20 July 2016

Current high-performance computing (HPC) systems feature a hierarchical hardware design: distributed memory across nodes, and shared memory among the cores within each node. Parallel programs can combine distributed-memory parallelization across nodes (MPI) with shared-memory parallelization inside each node (OpenMP) to improve overall performance, to reduce communication needs and memory consumption, or to improve load balance for some applications.
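As a concrete illustration of the combination described above, here is a minimal hybrid "hello world" sketch. It is not code from the webinar; the file name and printed text are our own. Each MPI process spawns a team of OpenMP threads, and every thread reports its (rank, thread) pair.

```c
/* hybrid_hello.c - a minimal hybrid MPI+OpenMP sketch (illustrative). */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request thread support: MPI_THREAD_FUNNELED means only the
     * main thread will make MPI calls, which is the most common
     * pattern in hybrid codes. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI process opens an OpenMP parallel region; the thread
     * count is controlled by OMP_NUM_THREADS at run time. */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, size, tid, nthreads);
    }

    MPI_Finalize();
    return 0;
}
```

With 2 MPI processes and 4 threads each, this prints eight lines, one per (rank, thread) combination, in an arbitrary interleaved order.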

In this seminar, we will describe the difference between the message-passing and shared-memory models, explain the basic principles of the hybrid parallel programming approach, show how to write basic hybrid codes, and finally discuss how to compile and execute hybrid MPI+OpenMP code on SHARCNET clusters.
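The compile-and-run step mentioned above typically looks like the sketch below. Exact compiler wrappers, scheduler commands, and core counts vary from cluster to cluster, so every name and number here is an illustrative assumption, not SHARCNET-specific documentation; consult your cluster's own documentation for the real submission syntax.

```shell
# Compile: an MPI compiler wrapper plus the compiler's OpenMP flag
# (file and executable names are assumptions for this example).
mpicc -fopenmp -O2 hybrid_hello.c -o hybrid_hello

# Run interactively: 4 MPI processes, each with 8 OpenMP threads.
export OMP_NUM_THREADS=8
mpirun -np 4 ./hybrid_hello
```

The usual design goal is one MPI process per node (or per socket) with one OpenMP thread per remaining core, so that communication happens between nodes while the cores inside a node share memory directly.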

This seminar is intended for those who have a basic understanding of MPI and OpenMP, as well as for those who run third-party hybrid software on SHARCNET clusters.