As InfiniBand (IB), Omni-Path, and High-Speed Ethernet (HSE) technologies mature, they are being used to design and deploy various High-End Computing (HEC) systems: HPC clusters with GPGPUs and Xeon Phis supporting MPI, Storage and Parallel File Systems, Cloud Computing systems with SR-IOV Virtualization, Grid Computing systems, and Deep Learning systems. These systems are bringing new challenges in terms of performance, scalability, portability, reliability and network congestion. Many scientists, engineers, researchers, managers and system administrators are becoming interested in learning about these challenges, approaches being used to solve these challenges, and the associated impact on performance and scalability.
This tutorial will start with an overview of these systems. Advanced hardware and software features of IB, Omni-Path, HSE, and RoCE, and their capabilities to address these challenges, will be emphasized. Next, we will focus on OpenFabrics RDMA and libfabric programming, as well as the network management infrastructure and tools needed to use these systems effectively. A common set of challenges faced while designing these systems will be presented. Finally, case studies focusing on domain-specific challenges in designing these systems (including the associated software stacks), their solutions, and sample performance numbers will be presented.
Remark: The course will be preceded by an introductory treatment of those topics in the course InfiniBand, Omni-Path, and High-Speed Ethernet for Dummies (separate registration needed).
Purpose of the course (benefits for the attendees)
The goals and benefits of this tutorial are as follows:
- Providing an overview of current large-scale deployments of clusters in various domains and the issues being faced.
- Providing an overview of modern features (hardware and software) available in IB, Omni-Path, and 10/25/40/50/100 Gbps Ethernet to alleviate the issues of large-scale deployments.
- Providing a set of demos focusing on RDMA and libfabric programming, and on the network management infrastructure and tools to effectively use these systems.
- Case studies focusing on domain-specific challenges in designing these systems (including the associated software stacks), their solutions and sample performance numbers.
About the tutors
Dhabaleswar K. (DK) Panda is a Professor and University Distinguished Scholar of Computer Science at the Ohio State University. He obtained his Ph.D. in computer engineering from the University of Southern California. His research interests include parallel computer architecture, high-performance computing, communication protocols, file systems, network-based computing, Big Data, and Deep Learning. He has published over 400 papers in major journals and international conferences related to these research areas.
Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, Omni-Path, HSE, and RDMA over Converged Enhanced Ethernet (RoCE). His research group is currently collaborating with National Laboratories and leading InfiniBand, Omni-Path, and Ethernet/iWARP companies on designing various subsystems of next-generation high-end systems. The MVAPICH2 (High Performance MPI over InfiniBand, Omni-Path, iWARP, and RoCE) open-source software package, developed by his research group, is currently being used by more than 2,800 organizations worldwide (in 85 countries). These libraries are available from http://mvapich.cse.ohio-state.edu. This software has enabled several InfiniBand clusters (including the 1st one) to get into the latest TOP500 ranking. These software packages are also available with the OpenFabrics stack for network vendors (InfiniBand, Omni-Path, and iWARP), server vendors, and Linux distributors. The RDMA-enabled Apache Hadoop, Spark, and Memcached packages, consisting of acceleration for HDFS, MapReduce, RPC, and Memcached and support for clusters with Lustre file systems, are publicly available from http://hibd.cse.ohio-state.edu. These libraries are being used by more than 245 organizations in 31 countries.
The group has also been focusing on co-designing Deep Learning frameworks and MPI libraries. A high-performance and scalable version of the Caffe framework is available from the High-Performance Deep Learning (HiDL) project site (http://hidl.cse.ohio-state.edu). Dr. Panda's research is supported by funding from the US National Science Foundation, the US Department of Energy, the US Department of Defense, and several industry sponsors including Intel, Cisco, Sun, Mellanox, QLogic, Microsoft, NVIDIA, and NetApp. He is an IEEE Fellow and a member of ACM.
More details about Dr. Panda, including a comprehensive CV and publications are available at: http://web.cse.ohio-state.edu/~panda.2/
Dr. Hari Subramoni has been a research scientist in the Department of Computer Science and Engineering at the Ohio State University, USA, since September 2015. His current research interests include high-performance interconnects and protocols, parallel computer architecture, network-based computing, exascale computing, network-topology-aware computing, QoS, power-aware LAN-WAN communication, fault tolerance, virtualization, big data, and cloud computing. He has published over 50 papers in international journals and conferences related to these research areas. He has been actively involved in various professional activities in academic journals and conferences. Dr. Subramoni is doing research on the design and development of the MVAPICH2 (High-Performance MPI over InfiniBand, iWARP, and RoCE) and MVAPICH2-X (Hybrid MPI and PGAS (OpenSHMEM, UPC, and CAF)) software packages. He is a member of IEEE.
More details about Dr. Subramoni are available at: http://www.cse.ohio-state.edu/~subramon
1. Exascale Trend and Brief Overview of different kinds of HEC Systems
2. Advanced Features for IB
3. Advanced Features for Omni-Path
4. Advanced Features for HSE
5. RDMA over Converged Ethernet (V1 and V2)
6. Demos: OpenFabrics Software Stack and RDMA Programming
* Verbs - Channel Semantics
* Verbs - RDMA Semantics
7. Demos: libfabric Software Stack and Programming
8. Demos: Network Management Infrastructure and Tools
* Subnet Manager
* Diagnostic Tools: Discovery, Health Monitoring, Performance Monitoring
* Fabric Management Tools
* OSU INAM
9. Common Challenges in Building HEC systems with IB and HSE: Hardware Aspects
* Network Adapters and NUMA Interactions
* Scalability and Memory Overheads
* Network Switches, Topology and Routing
* Network Bridges
10. System Specific Challenges and Case Studies
* HPC (MPI, PGAS and GPU/Xeon Phi Computing)
* Deep Learning
* Grid and Cloud Computing
11. Conclusions, Final Q&A, and Discussion
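To give a flavor of the verbs demos in item 6, the sketch below contrasts the two communication models covered there: channel semantics (two-sided SEND, where the receiver must have posted a matching receive buffer) and memory semantics (one-sided RDMA WRITE directly into the peer's registered memory, without involving the remote CPU). This is an illustrative sketch only, not the tutorial's actual demo code: it assumes an already-connected queue pair, a registered memory region, and peer address information (remote address and rkey) exchanged out of band, and it requires libibverbs and an RDMA-capable adapter to build and run.

```c
/* Illustrative sketch of the two verbs communication models.
 * Assumes: qp is a connected queue pair, mr a registered memory region,
 * and (remote_addr, rkey) were obtained from the peer out of band. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Channel semantics: two-sided SEND. The peer must have posted a
 * matching receive buffer with ibv_post_recv() beforehand. */
static int post_send(struct ibv_qp *qp, struct ibv_mr *mr,
                     void *buf, uint32_t len)
{
    struct ibv_sge sge = {
        .addr = (uintptr_t)buf, .length = len, .lkey = mr->lkey
    };
    struct ibv_send_wr wr, *bad_wr;
    memset(&wr, 0, sizeof wr);
    wr.opcode     = IBV_WR_SEND;        /* two-sided transfer */
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.send_flags = IBV_SEND_SIGNALED;  /* generate a completion */
    return ibv_post_send(qp, &wr, &bad_wr);
}

/* Memory (RDMA) semantics: one-sided WRITE into the peer's registered
 * buffer. The remote CPU does not post a receive and is not notified. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *buf, uint32_t len,
                           uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr = (uintptr_t)buf, .length = len, .lkey = mr->lkey
    };
    struct ibv_send_wr wr, *bad_wr;
    memset(&wr, 0, sizeof wr);
    wr.opcode             = IBV_WR_RDMA_WRITE;  /* one-sided transfer */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.send_flags = IBV_SEND_SIGNALED;
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

In both cases completion is detected by polling the completion queue with ibv_poll_cq(); the key difference is that only the channel-semantics SEND consumes a receive work request on the remote side.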
Capacity and Fees
- See the links below for how to get to the campus of VŠB - Technical University Ostrava and to the IT4Innovations building.
- Documentation for IT4Innovations' computer systems is available at https://docs.it4i.cz/.