The Message Passing Interface (MPI) is the most prominent programming model for parallel programming on distributed-memory systems. Virtually every supercomputer in the world provides a tuned implementation of MPI, and practically every large distributed-memory application uses MPI either directly or indirectly. While MPI has evolved from bare-minimum message-passing functionality into a feature-rich programming model, most users are not aware of its capabilities and consequently tend to use only a minimal set of its functionality.
About the tutor
Dr. Pavan Balaji is a Computer Scientist and Group Lead at Argonne National Laboratory. His team develops the MPICH implementation of MPI, which is used on nine of the top 10 supercomputers in the world. Dr. Balaji also chairs the hybrid programming working group within the MPI Forum and contributes to almost every aspect of the MPI standard. He is an author of the MPI-2.1, MPI-2.2, MPI-3.0, and MPI-3.1 standards, and is heavily involved in the preparation of the upcoming MPI-4 standard. He is also an editor of the book "Programming Models for Parallel Computing", which concisely introduces all of the commonly used programming models in parallel computing.
|Monday December 14, 2015|
Introductory Concepts in MPI
Group Communication and Derived Datatypes
|Tuesday December 15, 2015|
Network Locality and Topology
Please bring a notebook (laptop) to access the remote system for the hands-on part.
Registration is obligatory - registration form here; the deadline is as stated above, or earlier if course capacity is exhausted.
The event is free of charge for participants.
- See the transport and accommodation page (in Czech) for directions to the campus of VŠB - Technical University Ostrava and to the IT4Innovations building.
- Participants without an IT4Innovations card should arrive early enough to complete the formalities of obtaining an entry permit.
- System documentation is available at http://support.it4i.cz/docs.