Real-world applications often encompass end-to-end data processing pipelines composed of a large number (millions) of interconnected computational tasks of varying granularity. We introduce HyperLoom as a platform for defining and executing such pipelines in distributed environments using a Python API.
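The core idea of such a platform can be sketched in plain Python: tasks form a directed acyclic graph and are executed in dependency order. This is a minimal, library-agnostic illustration; the class and function names here are invented for the sketch and do not reflect HyperLoom's actual API.

```python
# Minimal task-pipeline sketch: tasks form a DAG and are executed in
# topological (dependency) order. Illustrative only -- not HyperLoom's API.
from collections import deque

class Task:
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, list(deps)

def run_pipeline(tasks):
    """Execute tasks in dependency order; return a dict of results by name."""
    by_name = {t.name: t for t in tasks}
    indegree = {t.name: len(t.deps) for t in tasks}
    dependents = {t.name: [] for t in tasks}
    for t in tasks:
        for d in t.deps:
            dependents[d.name].append(t.name)
    ready = deque(n for n, deg in indegree.items() if deg == 0)
    results = {}
    while ready:
        name = ready.popleft()
        t = by_name[name]
        # A task receives the results of its dependencies as arguments.
        results[name] = t.fn(*(results[d.name] for d in t.deps))
        for succ in dependents[name]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                ready.append(succ)
    return results

# Usage: load -> transform -> two aggregations -> combine.
load = Task("load", lambda: [1, 2, 3, 4])
double = Task("double", lambda xs: [2 * x for x in xs], deps=[load])
total = Task("total", sum, deps=[double])
count = Task("count", len, deps=[double])
report = Task("report", lambda s, c: s / c, deps=[total, count])

print(run_pipeline([load, double, total, count, report])["report"])  # → 5.0
```

In a real distributed setting the ready queue would be drained by workers on many nodes rather than a single loop, but the dependency bookkeeping is the same.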
The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a modular library for the scalable numerical solution of complex problems in science and engineering. It is designed primarily for the computations typically associated with the solution of PDEs, but it is also used successfully in other fields, e.g., data science. It provides large sparse matrices, linear algebra, nonlinear solvers, time integrators, optimization, discretization, and more.
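To make the kind of problem concrete: PETSc's KSP component provides Krylov solvers such as conjugate gradient for large sparse linear systems. The following self-contained sketch solves a small 1-D Poisson system (tridiagonal, symmetric positive definite) with unpreconditioned conjugate gradient; plain Python lists stand in for PETSc's Mat/Vec/KSP objects, so this is an illustration of the method, not PETSc's API.

```python
# Conjugate gradient on a sparse SPD system -- the kind of Krylov solve
# PETSc's KSP component performs at scale. Illustration only, not PETSc code.

def matvec_poisson1d(x):
    """y = A x for the tridiagonal 1-D Poisson matrix (2 on the diagonal, -1 off it)."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        y[i] = 2.0 * x[i]
        if i > 0:
            y[i] -= x[i - 1]
        if i < n - 1:
            y[i] -= x[i + 1]
    return y

def cg(matvec, b, tol=1e-10, maxit=1000):
    """Unpreconditioned conjugate gradient for symmetric positive-definite systems."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual r = b - A*0 = b
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Solve A x = b for a unit load; CG converges in at most n steps for SPD A.
b = [1.0] * 8
x = cg(matvec_poisson1d, b)
```

The point of PETSc is that the same algorithmic structure runs on distributed matrices and vectors across thousands of nodes, with preconditioning and many solver variants selectable at runtime.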
The course continues the tradition of tutorials in the field of GPGPU, i.e., computing on graphics cards, which the lecturer has been running at IT4Innovations since the beginning of 2014.
VI-HPS Tuning Workshop
This workshop, organized jointly by VI-HPS, LRZ and IT4Innovations at the Leibniz Supercomputing Centre (LRZ) in Garching, continues the series of recognized VI-HPS Tuning Workshops of the Virtual Institute – High Productivity Supercomputing (VI-HPS).
Summary and benefits for the attendees
Numerical simulations conducted on current high-performance computing (HPC) systems face an ever-growing need for scalability. Larger HPC platforms provide opportunities to push the limits on the size and properties of what can be accurately simulated. Serial approaches to handling I/O in a parallel application will dominate performance on massively parallel systems. Heterogeneity of platforms can impose a high maintenance burden when different data representations are needed.
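The contrast between serial and parallel I/O can be sketched briefly: instead of funneling all data through one writer process, each process writes its own contiguous block of a shared file at a precomputed byte offset, which is the idea behind MPI-IO routines such as MPI_File_write_at. The sketch below simulates the ranks sequentially in one process; the names and layout are illustrative assumptions.

```python
# Offset-based parallel I/O sketch: each "rank" writes its block of a shared
# file at its own byte offset, so no serial gather through rank 0 is needed.
# Real codes would use MPI-IO; here ranks are simulated in a single process.
import os
import struct
import tempfile

NRANKS = 4
LOCAL_N = 8                      # doubles owned by each rank
ITEM = struct.calcsize("d")      # bytes per double

def write_block(path, rank, values):
    """Write this rank's values at its precomputed offset in the shared file."""
    offset = rank * LOCAL_N * ITEM
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(struct.pack(f"{len(values)}d", *values))

path = os.path.join(tempfile.mkdtemp(), "field.bin")
# Pre-size the shared file, analogous to MPI_File_set_size.
with open(path, "wb") as f:
    f.truncate(NRANKS * LOCAL_N * ITEM)

# Each rank writes independently; the order of writers does not matter.
for rank in reversed(range(NRANKS)):
    write_block(path, rank, [float(rank * LOCAL_N + i) for i in range(LOCAL_N)])

with open(path, "rb") as f:
    data = struct.unpack(f"{NRANKS * LOCAL_N}d", f.read())
```

Because each block is disjoint, the writes need no coordination beyond the agreed data layout, and a single binary file with a fixed representation also sidesteps the per-platform data-format divergence mentioned above.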
The course focuses on optimization-oriented programming for the latest Intel architectures for HPC. It covers state-of-the-art Intel architecture features, changes among the recent generations, and upcoming trends, to support HPC software developers and researchers in designing their applications. The two-day course also covers the software development and analysis tools needed for C/C++ and Fortran development, with a focus on performance, vectorization, and energy efficiency.
The current wave of advances in Deep Learning (DL) has led to many exciting challenges and opportunities for Computer Science and Artificial Intelligence researchers alike. Modern DL frameworks like Caffe/Caffe2, TensorFlow, Cognitive Toolkit, Torch, and several others have emerged that offer ease of use and flexibility to describe, train, and deploy various types of Deep Neural Networks (DNNs) including deep convolutional networks.
As InfiniBand (IB), Omni-Path, and High-Speed Ethernet (HSE) technologies mature, they are being used to design and deploy various High-End Computing (HEC) systems: HPC clusters with GPGPUs and Xeon Phis supporting MPI, Storage and Parallel File Systems, Cloud Computing systems with SR-IOV Virtualization, Grid Computing systems, and Deep Learning systems. These systems are bringing new challenges in terms of performance, scalability, portability, reliability and network congestion.