Message-Passing Programming with MPI

Date: 
Thu, 12.12.2013 10:00 - Fri, 13.12.2013 16:00
Venue: 
Institute of Geonics of the Czech Academy of Sciences (Ústav geoniky AVČR), Studentská 1768, Ostrava (conference room)
Lecturer: 
David Henty (EPCC)
Level: 
basic - intermediate
Language: 
English

Annotation

The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

The course will be delivered in an intensive format using the Anselm supercomputer for practical exercises. It will be taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.

Parallel programming by definition involves co-operation between many processors to solve a common problem. The programmer has to define the individual tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.

Purpose of the course (benefits for the attendees)

On completion of this course students should be able to:

  • Understand the message-passing model in detail.
  • Implement standard message-passing algorithms in MPI.
  • Debug simple MPI codes.
  • Measure and comment on the performance of MPI codes.
  • Design and implement efficient parallel programs to solve regular-grid problems.

Schedule (preliminary)

Thursday, December 12, 2013
10:00-11:30  Message-Passing Concepts
             Practical: Parallel Traffic Modelling
11:30-12:00  coffee break
12:00-13:30  MPI Programs
             MPI on Anselm
             Practical: Hello World
13:30-14:30  lunch break
14:30-16:00  Point-to-Point Communication
             Practical: Pi
16:00-16:30  coffee break
16:30-18:00  Communicators, Tags and Modes
             Practical: Pi continued / Ping-Pong
Friday, December 13, 2013
09:00-10:30  Non-Blocking Communication
             Practical: Message Round a Ring
10:30-11:00  coffee break
11:00-12:30  Collective Communication
             Practical: Collective Communication
12:30-13:30  lunch break
13:30-15:30  Introduction to the Case Study
             Practical: Case Study

Prerequisites

A notebook (laptop) and the ability to program in Fortran, C or C++. An Anselm account is recommended; temporary Anselm accounts will be created for the course participants.

Registration

Registration is obligatory (registration form here); the deadline is December 5, 2013, or earlier if course capacity is exhausted.

Capacity

45 attendees

Practicalities

  • The venue building is marked as "UG" on the campus map below (in its upper left corner).
  • Do not forget your notebook.
  • Anselm training accounts will be distributed at registration.
  • Anselm cluster documentation is available at http://support.it4i.cz/docs.