09:00 - 10:00
Registration, Foyer
10:00 - 10:05
Welcome to the Conference, Room A
10:05 - 10:10
Words from the Conference Chairs, Room A
Chair:
Olaf Schenk (Università della Svizzera Italiana, Switzerland)
10:10 - 11:00
IP01 Efforts on Scaling and Optimizing Climate and Weather Forecasting Programs on Sunway TaihuLight
Presenter:
Haohuan Fu (Tsinghua University, China)
+ Abstract + Biography
Chair:
Dagmar Iber (ETH Zurich, Switzerland)
11:00 - 11:50
IP02 Towards the Decoding of the Human Brain
Presenter:
Katrin Amunts (Forschungszentrum Jülich, Germany)
+ Abstract + Biography
11:50 - 13:00
Lunch, Foyer
13:00 - 15:00
Minisymposia and Papers Sessions
Chair:
Tim Robinson (ETH Zurich / CSCS, Switzerland)
13:00 - 13:30
Towards the Virtual Rheometer: High Performance Computing for the Red Blood Cell Microstructure
Presenter:
Eva Athena Economides (ETH Zurich, Switzerland)
Track(s):
Engineering
+ Abstract + Paper
13:30 - 14:00
A Computational Framework to Assess the Influence of Changes in Vascular Geometry on Blood Flow
Presenter:
John Gounley (Duke University, United States of America)
Track(s):
Life Sciences
+ Abstract + Paper
14:00 - 14:30
Increasing the Efficiency of Sparse Matrix-Matrix Multiplication with a 2.5D Algorithm and One-Sided MPI
Presenter:
Alfio Lazzaro (University of Zurich, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Paper + Presentation
14:30 - 15:00
Evaluation of a Directive-Based GPU Programming Approach for High-Order Unstructured Mesh Computational Fluid Dynamics
Presenter:
Kunal Puri (Israel Institute of Technology, Israel)
Track(s):
Engineering
+ Abstract + Paper
Organizer(s):
Tarje Nissen-Meyer (University of Oxford, United Kingdom)
Track(s):
Solid Earth Dynamics

Linear elastodynamics is a physically and mathematically well-understood problem, and numerical techniques for it have been successfully developed and applied for decades. However, the sheer scale of the problem, reaching 10^12 degrees of freedom, as well as the multi-scale complexity of the underlying parameter space, render seismic applications some of the most challenging HPC problems in the physical sciences. This is particularly the case for the inverse problem of mapping millions of observations to model parameters. Technical bottlenecks arise on many fronts: meshing complex 3D geological structures, scalability, adaptation to emerging architectures, data infrastructure for millions of simulations, provenance, and code usability.
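For orientation, a generic statement of the linear elastodynamic equations referred to above (notation introduced here purely for illustration: u is the displacement field, rho the density, C the elastic stiffness tensor, epsilon the strain, f a body force):

\[
\rho\,\partial_t^2 \mathbf{u} = \nabla\cdot\boldsymbol{\sigma} + \mathbf{f},
\qquad
\boldsymbol{\sigma} = \mathbf{C}:\boldsymbol{\varepsilon}(\mathbf{u}),
\qquad
\boldsymbol{\varepsilon}(\mathbf{u}) = \tfrac{1}{2}\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}}\bigr).
\]

Discretizing this system at the scales quoted above is what drives problem sizes towards 10^12 degrees of freedom.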

In this minisymposium, we will hear about a diverse range of topics covering state-of-the-art wave propagation at scales ranging from the globe to the human body, each with techniques adapted to the specific setting and their HPC solutions. Many of the talks will be based on variants of the spectral-element technique, which has dominated large-scale seismology for the past two decades. Novel variants include its scalable adaptation to tetrahedra, a new flexible implementation in C++, coupling to pseudo-spectral approaches, and scaling on emerging architectures. Other techniques include the discontinuous Galerkin method for dynamic earthquake rupture and an immersive approach that couples numerical modeling with wave tank experiments on FPGAs.

Many of the talks will be driven by requirements from specific applications such as nonlinear earthquake rupture dynamics in a complex 3D geological fault system, multiscale geological structures at scales reaching the deep Earth interior, wave tank experiments, seismic tomography at large and industrial scales, and an application to breast cancer detection using ultrasound.

In the discussion, we will strive to identify common bottlenecks, ideas for adapting to emerging architectures, and a possible basic set of shared algorithmic solutions, and discuss how the different approaches could be consolidated around commonalities such as meshing, MPI strategies, data infrastructures, or numerical solvers.

13:00 - 13:30
Seismic Wave Propagation on Emerging HPC Architectures
Presenter:
Daniel Peter (King Abdullah University of Science and Technology, Saudi Arabia)
+ Abstract
13:30 - 14:00
C++ Template Mixins in Salvus: A Platform for Multiscale Waveform Modeling and Inversion
Presenter:
Michael Afanasiev (ETH Zurich, Switzerland)
+ Abstract
14:00 - 14:30
Immersive Wave Experimentation: Extreme Low-Latency and HPC Requirements Enabled by FPGA Technology
Presenter:
Dirk-Jan van Manen (ETH Zurich, Switzerland)
+ Abstract
14:30 - 15:00
Petascale Computation of Multi-Physics Seismic Simulations - and Beyond
Presenter:
Alice-Agnes Gabriel (Ludwig Maximilian University of Munich, Germany)
+ Abstract
Organizer(s):
Markus Huber (TU Munich, Germany)
Track(s):
Computer Science & Applied Mathematics

This series of two minisymposia provides a platform for exchanging ideas about scalable, memory-efficient, fast, and resilient solution techniques. These characteristics are crucial for science and engineering applications that will make use of exascale computing, such as geophysics, astrophysics, and aerodynamics. Algorithms in high-performance computing require a rethinking of standard approaches to ensure, on the one hand, full use of future computing power and, on the other hand, energy efficiency. A careful implementation of all performance-relevant parts and an intelligent combination with external libraries are fundamental for exascale computation. Multigrid and domain decomposition methods play an important role in many scientific applications, yet the two communities have often developed their ideas separately while exploiting the immense compute power of supercomputers. The minisymposia address both communities and focus on exchanging current research progress related to exascale-enabled solution techniques.

13:00 - 13:30
Parallel Nonlinear Domain Decomposition in Multiscale Problems
Presenter:
Oliver Rheinbach (TU Bergakademie Freiberg, Germany)
+ Abstract
13:30 - 14:00
FETI Based Solvers for Exascale Computations in Mechanics
Presenter:
Tomáš Kozubek (TU Ostrava, Czech Republic)
+ Abstract
14:00 - 14:30
Recent Advances in Multilevel Domain Decomposition: Large Scales, Heterogeneous Problems, and Unfitted Finite Elements
Presenter:
Santiago Badia (Polytechnic University of Catalonia, Spain)
+ Abstract
14:30 - 15:00
GPU-Accelerated Matrix-Free Methods in Geophysics: Case Studies in pTatin3d and StagYY
Presenter:
Karl Rupp (TU Wien, Austria)
+ Abstract
Organizer(s):
Paolo Bientinesi (RWTH Aachen University, Germany)
Track(s):
Computer Science & Applied Mathematics

The language of linear algebra is ubiquitous across scientific and engineering disciplines and is used to describe phenomena and algorithms alike. The translation of linear algebra expressions into high-performance code is a surprisingly challenging problem, requiring knowledge of high-performance computing, compilers, and numerical linear algebra. Typically, the user is offered two contrasting alternatives: either high-level languages (e.g. Matlab), which enable fast prototyping at the expense of performance, or low-level languages (e.g. C and Fortran), which allow for highly efficient solutions at the expense of extremely long development cycles. This workshop brings together domain specialists who believe that productivity and high performance need not be mutually exclusive.
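As a minimal, hedged illustration of this mapping problem (a hypothetical sketch, not drawn from any of the tools presented in this session; NumPy stands in here for a generic high-level environment), the same expression x = A^{-1} b can be mapped to code paths that differ substantially in cost and numerical robustness:

import numpy as np

# Hypothetical example: build a well-conditioned test system.
rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant by construction
b = rng.standard_normal(n)

# Literal mapping of the formula: forms an explicit inverse,
# roughly three times the work of a factorization and numerically less robust.
x_naive = np.linalg.inv(A) @ b

# Informed mapping: a factorization-based solve, the choice an expert
# (or a mapping tool such as those discussed in this session) would make.
x_solve = np.linalg.solve(A, b)

# Both agree to rounding error; only the cost and robustness differ.
print(np.max(np.abs(x_naive - x_solve)))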

13:00 - 13:30
LAMP: The Linear Algebra Mapping Problem
Presenter:
Paolo Bientinesi (RWTH Aachen University, Germany)
+ Abstract
13:30 - 14:00
Hydra: From Linear Equation to Parallel Heterogeneous Code
Presenter:
Denis Barthou (INRIA, France)
+ Abstract
14:00 - 14:30
Armadillo: C++ Template Metaprogramming for Compile-Time Optimization of Linear Algebra
Presenter:
Ryan Curtin (Symantec Corporation, United States of America)
+ Abstract
14:30 - 15:00
Design Choices for Numerical Linear Algebra in a Compiled High-Level Language
Presenter:
Jiahao Chen (Massachusetts Institute of Technology, United States of America)
+ Abstract
Organizer(s):
Stefan Goedecker (University of Basel, Switzerland)
Track(s):
Chemistry & Materials, Physics

At the beginning of the current century we face massive challenges arising from the increasing global demand for energy, centered on two major issues. On the one hand, conventional fossil fuel resources such as oil, natural gas, and coal are limited and dwindling. On the other hand, the emissions from the combustion of fossil fuels clearly alter the chemical composition of our atmosphere, with adverse effects on climate and environment. These global challenges urgently demand technological advances in energy conversion, storage, and transport. The search for novel materials for energy applications has therefore recently become an extremely active area of research worldwide, with efforts in chemistry, solid-state physics, and materials science, for example via the Materials Genome Initiative in the US and related initiatives in other countries. In this search, computational tools are being actively developed not only to explore the uncharted chemical space of new materials, but also to understand the complex interplay of materials properties with the underlying crystal structures.

One particular class of materials for energy applications is thermoelectric materials, which drive thermoelectric generators that allow for reliable, clean, emission-free conversion of (waste) heat into electricity. Until the mid-1990s, thermoelectrics had been considered inefficient and not economically relevant, but with improved structural engineering and intense research on novel complex materials, interest in thermoelectrics has recently revived. The efficiency of a thermoelectric material is governed by the so-called figure of merit zT, which is maximized by increasing the thermopower and the electrical conductivity while reducing the thermal conductivity. These materials properties are, however, strongly interrelated; for example, in most materials the thermal and electrical conductivities are linked through the Wiedemann-Franz law. Hence, the search for a material with maximal zT poses a non-trivial materials design challenge.
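For reference, the standard definition of the figure of merit and the Wiedemann-Franz relation mentioned above are (notation introduced here only for illustration: S thermopower/Seebeck coefficient, sigma electrical conductivity, kappa total thermal conductivity, kappa_e its electronic part, T absolute temperature, L the Lorenz number):

\[
zT = \frac{S^{2}\,\sigma\,T}{\kappa},
\qquad
\kappa_{e} = L\,\sigma\,T .
\]

The coupling expressed by the second relation is one reason why the three quantities entering zT cannot be tuned independently.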

This symposium aims at bringing together scientists to share their computational efforts in thermoelectric materials development. An accurate description of bulk lattice thermal transport, which is governed by phonon-phonon interactions, demands advanced simulation techniques and large HPC infrastructures. Solving the Boltzmann phonon transport equation requires knowledge of the anharmonic energy contributions that give rise to phonon scattering, one of the most computationally demanding aspects of modeling thermal resistivity. Density functional perturbation theory and finite difference methods are the current state-of-the-art approaches, but remain computationally highly demanding. Furthermore, the interactions of phonons with electrons become increasingly important at elevated temperatures and have recently been a focus of research on thermoelectric materials. Finally, methods for modeling transport properties on a large scale are required for the discovery of new materials with improved thermoelectric properties. The focus of this symposium will be on novel approaches to modeling transport properties, including methods from machine learning, signal processing, and high-throughput techniques, to advance the in silico discovery of thermoelectric materials.

13:00 - 13:30
First-Principles Studies of Strongly Anharmonic Crystalline Solids
Presenter:
Vidvuds Ozolins (Yale University, United States of America)
+ Abstract
13:30 - 14:00
Thermal Transport from First-Principles: Phonons, Relaxons, and Transport Waves
Presenter:
Nicola Marzari (EPFL, Switzerland)
+ Abstract
14:00 - 14:30
Ab Initio Calculations of the Lattice Thermal Conductivity and the Discovery of New Thermoelectric Materials
Presenter:
Laurent Chaput (University of Lorraine, France)
+ Abstract
14:30 - 15:00
Systematic Data Collection from First-Principles Lattice Thermal Conductivity Calculation and its Analysis
Presenter:
Atsushi Togo (Kyoto University, Japan)
+ Abstract
Organizer(s):
Ramesh Balakrishnan (Argonne National Laboratory, United States of America)
Track(s):
Engineering

The simulation of turbulent flows in engineering applications is often characterized by high Reynolds numbers, physical processes that depend on length scales too small to be resolved, and complex geometry. Advances in computing hardware notwithstanding, it is becoming clear that large eddy simulation (LES) of such flows, in which the resolved/filtered scales of motion are evolved and the unresolved scales are modeled, is still intractable. The large number of grid points required for a well-resolved simulation of the flow physics places a greater need on modeling the unresolved scales and evolving the resolved scales in a manner that minimizes dissipative and dispersive errors. Given that in an LES the errors in the solution are a combination of filtering errors characterized by the filter width (delta), the numerical order of accuracy characterized by the cell width (h), and the sub-grid scale (SGS) modeling error, an assessment of the resulting "solution" is complicated by the difficulty of isolating the effects of each of these contributing factors. This area of research offers opportunities to quantify the tradeoffs between the computational advantages offered by higher-order numerical methods and the turbulence-resolving capabilities of these methods in the context of LES of high Reynolds number flows.

On emerging exascale computing platforms, where the available power is capped at 20 MW, the architecture is increasingly characterized by processors with a large number of cores running at dynamic clock speeds that decrease as the cores begin to overheat, deep memory hierarchies with less on-chip memory, and multiple pathways to parallelizing algorithms, ranging from coarse-grained parallelism (MPI) to fine-grained parallelism (threads, vectorization). The reduced on-chip memory requires time-consuming operations to fetch data from external (off-chip) memory into local cache in order to make computations possible. On such platforms, the traditional measure of assessing the efficiency of parallel codes, FLOPs alone, is being replaced by the more meaningful arithmetic intensity (AI), defined as the ratio of FLOPs to the number of load-store operations. For turbulence simulations, it would appear that higher-order numerical methods, which are less memory-bandwidth limited, may offer an obvious advantage.

However, despite a veritable body of literature documenting the advantages of higher-order methods when applied to idealized problems, i.e. cases where one has a fairly high degree of control over the inflow and boundary conditions and the geometry of the computational domain, it is not clear whether these methods can serve as the gold standard when they drive the compute engine of a predictive flow simulation tool with considerable uncertainties in the flow conditions and complexities in the geometry. The focus of this minisymposium, therefore, is the presentation of higher-order numerical discretizations, their impact on the resolvable turbulent flow physics, and the scaling and parallel performance of higher-order discretizations on emerging computing hardware.
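A compact statement of the performance model alluded to here (a generic roofline-style bound, not specific to any code in this session; P_peak is the peak floating-point rate and B the sustained rate at which load-store operations can be serviced):

\[
\mathrm{AI} = \frac{\#\,\text{floating-point operations}}{\#\,\text{load-store operations}},
\qquad
P_{\text{attainable}} \;\le\; \min\bigl(P_{\text{peak}},\; \mathrm{AI}\times B\bigr).
\]

In this picture, higher-order discretizations tend to raise AI by performing more arithmetic per datum loaded, which is the advantage referred to above.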

13:00 - 13:30
Numerical Simulations and Robust Predictions of Cloud Cavitation Collapse
Presenter:
Fabian Wermelinger (ETH Zurich, Switzerland)
+ Abstract
13:30 - 14:00
Scalable Implicit Flow Solver for Realistic Wing Simulations with Flow Control
Presenter:
Michel Rasquin (Cenaero, Belgium)
+ Abstract
14:00 - 14:30
Developing Discontinuous Galerkin Methods for Large Eddy Simulation of Multistage Turbomachinery
Presenter:
Michel Rasquin (Cenaero, Belgium)
+ Abstract
14:30 - 15:00
Scale-Resolving Simulations with SU2 for Wind Engineering Applications
Presenter:
Ramesh Balakrishnan (Argonne National Laboratory, United States of America)
+ Abstract
Organizer(s):
Marie-Christine Sawley (Intel Semiconductor AG, Switzerland)
Track(s):
Computer Science & Applied Mathematics, Physics

Recent observations of the different tiers of the TOP500 show the growing impact of computing systems based on architectural features such as more complex memory hierarchies incorporating fast yet limited memory per core, the addition of large-capacity non-volatile memory, a substantial increase in cores per shared-memory "island", and the ever closer integration of high-performance interconnects with the CPU and memory subsystem. Application codes that want to take advantage of such systems need to be reshaped to achieve good levels of performance. At the same time, it is important to ensure that codes can be maintained and developed further without undue complexity imposed by the execution systems. Moreover, significant advances in reconfigurable computing and its system integration have generated major interest: new generations of FPGAs provide highly efficient floating-point units, and fast cache-coherent interconnects to CPUs have been announced. On the software side, the momentum around OpenCL is lowering the entry barriers. Tighter integration of FPGAs and CPUs will allow traditional FPGA workloads to move closer to the more "general purpose" server and require less specialized custom boards.

These emerging computing architectures drive innovative developments in computer science and applied mathematics, which in turn enable new capabilities for scientific applications. The minisymposium will be an excellent opportunity to share some of the most recent, leading-edge advances for scientific applications enabled by these trends. New algorithmic developments, and how to code them on the emerging architectures, will be at the heart of the session: new PDE solvers for QCD, efficient implementations of the Fast Multipole Method applied to biomolecular simulations, using the cache-aware roofline model to guide performance optimization, and high-energy physics workloads projecting efficient usage of nodes combining Xeon CPUs and FPGAs.

13:00 - 13:30
The Fast Multipole Method and Point Dipole Moment Polarizable Force Fields
Presenter:
Jonathan Coles (TU Munich, Germany)
+ Abstract
13:30 - 14:00
Grid: A Data Parallel Structured Cartesian Framework for Lattice QCD Calculations
Presenter:
Peter Boyle (University of Edinburgh, United Kingdom)
+ Abstract
14:00 - 14:30
Modernizing Code for the Intel "Knights Landing" Processor: Real World Tales from the IXPUG Trenches
Presenter:
Michael Lysaght (Intel Inc., Ireland)
+ Abstract
14:30 - 15:00
FPGA Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments
Presenter:
Christian Faerber (CERN, Switzerland)
+ Abstract
Organizer(s):
Tom Vander Aa (IMEC, Belgium)
Track(s):
Emerging Domains in HPC, Life Sciences

The applied life sciences are of huge importance, both economically (the pharmaceutical sector alone accounts for roughly 30% of Switzerland's exports) and in terms of tackling societal challenges such as aging populations, the increasing burden of chronic diseases, and the spiraling costs of health care. The challenges in the industry are many and varied, and include the need for a much better understanding of why certain promising drugs fail trials, how best to identify and model sub-groups in patient populations to deliver on the promise of precision medicine, and how to integrate information and models ranging from the molecular level up to patient-worn sensors.

Due to their importance, the applied life sciences should be supported with the best possible tools to tackle these challenges. Whilst the use of computing is reasonably well established in this sector, the use of High-Performance Computing (HPC) is much less well established when compared to other sectors such as engineering or physics. This is indeed unfortunate given the potential of in silico experiments and analysis of complex data to advance the state of the art in the field and deliver concrete benefits to society.

In this minisymposium we will explore various approaches to the use of computing for the applied life sciences, ranging from lower-level systems modeling, through the application of large-scale machine learning to high volume screens in drug discovery, to analysis of genomic information. Each of these has different modeling and scaling challenges and has had varying levels of success in the application of HPC to the problem. We will have speakers from industry, academia and industrial-academic collaborations alike, giving varying perspectives on the state of the art and the potential for the application of HPC.

The specific areas covered by the speakers will be the following:
- HPC implementation of multi-target compound activity prediction in chemogenomics, based on state-of-the-art large-scale machine learning techniques
- Challenges in data handling and computation for the analysis of DNA for personalized healthcare
- Systems biology and HPC

13:00 - 13:30
Federated Analysis of Genomes for Personalized Healthcare
Presenter:
Satu Nahkuri (Roche, Switzerland)
+ Abstract
13:30 - 14:00
HPC for Inferring Causal Biological Networks
Presenter:
Joerg Stelling (ETH Zurich, Switzerland)
+ Abstract
14:00 - 14:30
Practical Experience with Software for Next-Generation Sequence Analyses in HPC Environment
Presenter:
Martin Mokrejš (TU Ostrava, Czech Republic)
+ Abstract
14:30 - 15:00
Machine Learning for Chemogenomics on HPC: Progress in the ExCAPE Project
Presenter:
Tom Vander Aa (IMEC, Belgium)
+ Abstract
15:00 - 15:30
Coffee Break, Foyer
15:30 - 17:30
Minisymposia and Papers Sessions
Chair:
Jack Wells (Oak Ridge National Laboratory, United States of America)
15:30 - 16:00
Asynchronous Task-Based Parallelization of Algebraic Multigrid
Presenter:
Amani Alonazi (King Abdullah University of Science and Technology, Saudi Arabia)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Paper
16:00 - 16:30
Scheduling Finite Difference Approximations for DAG-Modeled Large Scale Applications
Presenter:
Xavier Meyer (University of Lausanne, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Paper
16:30 - 17:00
Pareto Optimal Swimmers
Presenter:
Siddhartha Verma (ETH Zurich, Switzerland)
Track(s):
Life Sciences
+ Abstract + Paper
Organizer(s):
Tarje Nissen-Meyer (University of Oxford, United Kingdom)
Track(s):
Solid Earth Dynamics

Linear elastodynamics is a physically and mathematically well-understood problem, and numerical techniques for it have been successfully developed and applied for decades. However, the sheer scale of the problem, reaching 10^12 degrees of freedom, as well as the multi-scale complexity of the underlying parameter space, render seismic applications some of the most challenging HPC problems in the physical sciences. This is particularly the case for the inverse problem of mapping millions of observations to model parameters. Technical bottlenecks arise on many fronts: meshing complex 3D geological structures, scalability, adaptation to emerging architectures, data infrastructure for millions of simulations, provenance, and code usability.

In this minisymposium, we will hear about a diverse range of topics covering state-of-the-art wave propagation at scales ranging from the globe to the human body, each with techniques adapted to the specific setting and their HPC solutions. Many of the talks will be based on variants of the spectral-element technique, which has dominated large-scale seismology for the past two decades. Novel variants include its scalable adaptation to tetrahedra, a new flexible implementation in C++, coupling to pseudo-spectral approaches, and scaling on emerging architectures. Other techniques include the discontinuous Galerkin method for dynamic earthquake rupture and an immersive approach that couples numerical modeling with wave tank experiments on FPGAs.

Many of the talks will be driven by requirements from specific applications such as nonlinear earthquake rupture dynamics in a complex 3D geological fault system, multiscale geological structures at scales reaching the deep Earth interior, wave tank experiments, seismic tomography at large and industrial scales, and an application to breast cancer detection using ultrasound.

In the discussion, we will strive to identify common bottlenecks, ideas for adapting to emerging architectures, and a possible basic set of shared algorithmic solutions, and discuss how the different approaches could be consolidated around commonalities such as meshing, MPI strategies, data infrastructures, or numerical solvers.

15:30 - 16:00
Waveform Tomography for Breast Cancer Detection with Ultrasound
Presenter:
Christian Boehm (ETH Zurich, Switzerland)
+ Abstract
16:00 - 16:30
Scalable Wave Propagation in Multiscale Media Based on a Pseudospectral/Spectral-Element Method with Particle Relabeling
Presenter:
Tarje Nissen-Meyer (University of Oxford, United Kingdom)
+ Abstract
16:30 - 17:00
A Spectral Finite Element Discretisation for Elastodynamics on Unstructured Triangle and Tetrahedral Meshes
Presenter:
Dave A. May (University of Oxford, United Kingdom)
+ Abstract
17:00 - 17:30
GeoPC: Composable Solvers for Geophysics on Modern Architectures
Presenter:
Patrick Sanan (Università della Svizzera italiana, Switzerland)
+ Abstract
Organizer(s):
Dagmar Iber (ETH Zurich, Switzerland)
Track(s):
Life Sciences, Computer Science & Applied Mathematics

This minisymposium will focus on computational approaches to simulate tissue dynamics. Recent advances in algorithms, hardware, and microscopy enable more sophisticated and realistic simulations of tissue dynamics. A variety of simulation frameworks are being developed to capture different aspects of tissue dynamics. Each method has its advantages and disadvantages in terms of resolution, realism, and computational efficiency. This minisymposium will present a variety of state-of-the-art methods and their applications in biology. 

The four talks will present interface-capturing methods such as the phase-field method and vertex models, as well as LBIBCell, a simulation framework that permits tissue simulations at cellular resolution by combining the Lattice-Boltzmann method for fluid and reaction dynamics with an immersed boundary approach that captures the elastic properties of tissues and permits fluid-structure interactions.
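For context, the core update of a lattice Boltzmann scheme of the kind mentioned above can be written, in its simplest single-relaxation-time (BGK) form, as (a generic formulation, not necessarily the one implemented in LBIBCell; f_i are the particle distribution functions along the discrete lattice velocities c_i, tau the relaxation time, f_i^eq the local equilibrium):

\[
f_i(\mathbf{x} + \mathbf{c}_i\,\Delta t,\; t + \Delta t)
  = f_i(\mathbf{x}, t)
  - \frac{\Delta t}{\tau}\bigl(f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t)\bigr).
\]

The immersed boundary coupling typically enters through additional forcing terms that transfer momentum between this fluid lattice and the elastic cell boundaries.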

The minisymposium will thereby offer an overview of state-of-the-art approaches to tissue simulation and highlight recent advances and remaining challenges.

15:30 - 16:00
Phase-Field Based Simulations of Embryonic Branching Morphogenesis
Presenter:
Lucas D. Wittwer (ETH Zurich, Switzerland)
+ Abstract
16:00 - 16:30
Virtual Erythrocyte: Constitutive Relations and Fluid-Cell Interactions
Presenter:
Sergey Litvinov (ETH Zurich, Switzerland)
+ Abstract
16:30 - 17:00
Cell Monolayer Mechanics and Acomys Spine Morphogenesis
Presenter:
Aziza Merzouki (University of Geneva, Switzerland)
+ Abstract
17:00 - 17:30
LBIBCell: A Cell-Based Simulation Framework to Explore the Impact of Cell Mechanics on Tissue Organisation and Growth
Presenter:
Dagmar Iber (ETH Zurich, Switzerland)
+ Abstract
Organizer(s):
Niclas Jansson (Royal Institute of Technology, Sweden)
Track(s):
Computer Science & Applied Mathematics, Engineering

The complexity and nature of fluid flows imply that the resources needed to computationally model problems of industrial and academic relevance are virtually unbounded. CFD simulations are therefore a natural driver for exascale computing and have the potential for substantial societal impact, such as reduced energy consumption, alternative sources of energy, improved health care, and improved climate models. Extreme-scale CFD poses several cross-disciplinary challenges, e.g. algorithmic issues in scalable solver design, the handling of extremely large data sets with compression and in-situ analysis, and resilience and energy awareness in both hardware and algorithm design. This wide range of topics makes exascale CFD relevant to a broader HPC audience, extending beyond the traditional fluid dynamics community.

This minisymposium is organized by the EU-funded Horizon 2020 project ExaFLOW together with leading CFD experts from industry and will feature presentations showcasing their work on key algorithmic challenges in CFD on the way to exascale simulations, e.g. accurate and scalable solvers and strategies to ensure fault tolerance and resilience. The session aims at bringing together the CFD community as a whole, from HPC experts to domain scientists, to discuss current and future challenges towards exascale fluid dynamics simulations and to facilitate international collaboration.

15:30 - 16:00
Towards Adaptive Meshes for the Spectral Element Code Nek5000
Presenter:
Philipp Schlatter (Royal Institute of Technology, Sweden)
+ Abstract
16:00 - 16:30
Delivering Performance Through HPC
Presenter:
Julien Hoessler (McLaren Racing Ltd, United Kingdom)
+ Abstract
16:30 - 17:00
Multilevel Diskless Checksum Checkpoints for Automated Application Recovery
Presenter:
Allan Nielsen (EPFL, Switzerland)
+ Abstract
17:00 - 17:30
Targeting the Spectral/hp Element Method for Exascale Platforms
Presenter:
David Moxey (Imperial College London, United Kingdom)
+ Abstract
Organizer(s):
Peter Bauer (ECMWF, United Kingdom)
Track(s):
Climate & Weather

Weather and climate prediction centers face enormous challenges due to the rising cost of the energy required to run complex high-resolution forecast models on more and more processors, and due to the likelihood that Moore's law will soon reach its limit, with microprocessor feature density (and performance) no longer doubling every two years. But the biggest challenge to state-of-the-art computational services arises from their own software productivity shortfall. The application software at the heart of all prediction services throughout Europe is ill-equipped to adapt efficiently to the rapidly evolving heterogeneous hardware provided by the supercomputing industry. The solution is not to reduce the stringent requirements for Earth-system prediction but to combine scientific and computer-science expertise to define and co-design the necessary steps towards affordable, exascale high-performance simulations of weather and climate.

The Energy-efficient Scalable Algorithms for Weather Prediction at Exascale (ESCAPE) project brings together a consortium of weather prediction centres operating at global as well as European regional scales, university institutes performing research on numerical methods and novel code optimization techniques, HPC centres with vast experience in scalable code development and diverse processor technologies, large HPC hardware vendors operating market-leading systems, and a European start-up SME with novel and emerging optical processor technologies, to address the challenge of extreme-scale, energy-efficient high-performance computing. Key objectives of ESCAPE are to (i) define fundamental algorithm building blocks (“weather & climate dwarfs”) to foster trans-disciplinary research and innovation and to co-design, advance, benchmark and efficiently run the next generation of NWP and climate models on energy-efficient, heterogeneous HPC architectures, (ii) diagnose and classify weather and climate dwarfs on different HPC architectures, and (iii) combine frontier research on algorithm development and extreme-scale, high-performance computing applications with novel hardware technology to create a flexible and sustainable weather and climate prediction system.

This minisymposium will present the current state of development of prediction model components (weather and climate dwarfs) within and beyond ESCAPE, and the implications for performance and the programming models employed. The session acts in close collaboration with the minisymposium ‘Programming models and abstractions for weather and climate models: Today and in the future’.

15:30 - 16:00
Weather and Climate Dwarfs: Definition, Code Design and Performance
Presenter:
Andreas Mueller (ECMWF, United Kingdom)
+ Abstract
16:00 - 16:30
The MPDATA Advection Dwarf of the High-Resolution Regional COSMO-EULAG model
Presenter:
Zbigniew Piotrowski (Poznan Supercomputing and Networking Center, Poland)
+ Abstract
16:30 - 17:00
Multigrid Preconditioning of Elliptic Operators in High-Performance All-Scale Atmospheric Models
Presenter:
Mike Gillard (Loughborough University, United Kingdom)
+ Abstract
17:00 - 17:30
Parallel-In-Time Integration with PFASST: The Hyperbolic Case
Presenter:
Robert Speck (Forschungszentrum Jülich, Germany)
+ Abstract
Organizer(s):
Stefano Serra Capizzano (Insubria University, Italy)
Track(s):
Computer Science & Applied Mathematics

Isogeometric Analysis (IgA) is a recent but well established method for the analysis of problems governed by differential equations. Its goal is to reduce the gap between the worlds of Finite Element Analysis (FEA) and Computer Aided Design (CAD). One of the key ideas in IgA is to use a common spline representation model for the design as well as for the analysis, providing a true design-through-analysis methodology.

The IgA approach has proved superior to conventional FEA in various engineering application areas, including structural mechanics, electromagnetism, and fluid-structure interaction. The keystones of this success are the many outstanding properties of the underlying spline spaces and the associated B-spline basis. Spline representations allow for efficient (geometric) manipulation, high approximation power relative to their number of degrees of freedom, appealing spectral properties, and fast numerical linear algebra methods that exploit these spectral properties and/or tensor techniques.
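As a reminder of the basis referred to above, the B-spline basis functions on a knot vector t_0 <= t_1 <= ... can be defined by the standard Cox-de Boor recursion (stated here only for illustration, with the usual convention that terms with a zero denominator are dropped):

\[
B_{i,0}(t) =
\begin{cases}
1, & t_i \le t < t_{i+1},\\
0, & \text{otherwise},
\end{cases}
\qquad
B_{i,p}(t) = \frac{t - t_i}{t_{i+p} - t_i}\,B_{i,p-1}(t)
           + \frac{t_{i+p+1} - t}{t_{i+p+1} - t_{i+1}}\,B_{i+1,p-1}(t).
\]

The tensor-product and spectral structure of the matrices assembled from these functions is what the fast solvers discussed in this session exploit.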

The minisymposium will address the most recent research directions and results related to
1) analysis of spectral properties in concrete applications
2) fast numerical linear algebra methods in connection with B-splines, NURBS, extended spaces, etc.

15:30 - 16:00
Symbol Approach in IgA Matrix Analysis: from the Spectral Analysis to the Design of Fast Solvers
Presenter:
Carlo Garoni (Università della Svizzera italiana, Switzerland)
+ Abstract
16:00 - 16:30
IGA for MagnetoHydroDynamics (MHD) Problems
Presenter:
Ahmed Ratnani (Max Planck Institute for Plasma Physics, Germany)
+ Abstract
16:30 - 17:00
Optimal and Robust Multigrid for Isogeometric Analysis
Presenter:
Hendrik Speleers (University of Rome Tor Vergata, Italy)
+ Abstract
17:00 - 17:30
Spectral Analysis of the 2D Curl-Curl (Stabilized) Operator with Applications to the Related Iterative Solutions
Presenter:
Mariarosa Mazza (Max Planck Institute for Plasma Physics, Germany)
+ Abstract
Organizer(s):
Katharina Kormann (Max Planck Institute for Plasma Physics, Germany)
Track(s):
Physics, Computer Science & Applied Mathematics

Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes that can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, task-based parallelism, as well as new algorithms adapted to modern hardware. This minisymposium brings together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.
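The high dimensionality mentioned above comes from the underlying kinetic description; in its simplest collisionless form, the Vlasov equation evolves a distribution function f(x, v, t) over six-dimensional phase space (a generic statement for a species of charge q and mass m in electromagnetic fields E and B, added here only for illustration):

\[
\frac{\partial f}{\partial t}
+ \mathbf{v}\cdot\nabla_{\mathbf{x}} f
+ \frac{q}{m}\bigl(\mathbf{E} + \mathbf{v}\times\mathbf{B}\bigr)\cdot\nabla_{\mathbf{v}} f = 0 .
\]

Gyrokinetic models reduce this to five dimensions by averaging over the fast gyromotion, which is part of what the codes discussed here exploit.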

15:30 - 16:00
A Conservative Exponential Integrator for the Drift-Kinetic Model
Presenter:
Lukas Einkemmer (University of Innsbruck, Austria)
+ Abstract
16:00 - 16:30
Time Discretization of a Geometric Electromagnetic Particle-In-Cell Method
Presenter:
Katharina Kormann (Max Planck Institute for Plasma Physics, Germany)
+ Abstract
16:30 - 17:00
The Particle-In-Fourier (PIF) Approach Applied to Gyrokinetic Simulations and its Implementation on GPUs
Presenter:
Noé Ohana (EPFL, Switzerland)
+ Abstract + Presentation
17:00 - 17:30
InKS, a Programming Model to Decouple Semantics from Optimizations in HPC and its Application to the 6D Semi-Lagrangian Advection Use-Case
Presenter:
Ksander Ejjaaouani (INRIA, France)
+ Abstract
Organizer(s):
Michele Ceriotti (EPFL, Switzerland),
Anatole von Lilienfeld (University of Basel, Switzerland)
Track(s):
Chemistry & Materials, Emerging Domains in HPC, Life Sciences

Within materials informatics and cheminformatics, machine learning and inductive reasoning are known for their use in so-called structure-property relationships. Despite a long tradition of these methods in pharmaceutical applications, their overall usefulness for chemistry and materials science has been limited. Only over the last couple of years have a number of machine learning (ML) studies appeared that share the commonality that quantum mechanical or atomistically resolved properties are analyzed or predicted based on regression models defined in compositional and configurational space. The atomistic framework is crucial for the unbiased exploration of this space since it enables, at least in principle, the free variation of chemical composition, atomic weights, structure, and electron number. Substantial CPU investments have to be made in order to obtain sufficient training data using atomistic simulation protocols. This minisymposium boasts four of the most active players in the field, who share a common background in developing computationally demanding atomistic simulation methods and who have contributed new and original work based on unsupervised (Ceriotti and Varma) as well as supervised (Ghiringhelli and von Lilienfeld) learning.
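A common form for the regression models referred to above is kernel ridge regression over atomistic representations (a generic formulation, not attributed to any specific speaker; x_i are training structures/compositions, p_i their computed properties, k a similarity kernel, lambda a regularization parameter):

\[
\hat{p}(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i\, k(\mathbf{x}, \mathbf{x}_i),
\qquad
\boldsymbol{\alpha} = \bigl(\mathbf{K} + \lambda \mathbf{I}\bigr)^{-1} \mathbf{p},
\quad K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j).
\]

The training data p_i are exactly the quantum mechanically computed properties whose generation drives the CPU investment mentioned above.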

15:30 - 16:00
Big Data of Materials Science: Critical Role of the Descriptor
Presenter:
Luca Massimiliano Ghiringhelli (Fritz Haber Institute Berlin, Germany)
+ Abstract
16:00 - 16:30
Discerning Causalities in Protein Function using “Inverse” Machine Learning
Presenter:
Sameer Varma (University of South Florida, United States of America)
+ Abstract
16:30 - 17:00
Quantum Machine Learning
Presenter:
Anatole von Lilienfeld (University of Basel, Switzerland)
+ Abstract
17:00 - 17:30
Mapping Molecular Landscapes, from the Gas Phase to Crystals
Presenter:
Michele Ceriotti (EPFL, Switzerland)
+ Abstract
Organizer(s):
Matthias Bolten (University of Kassel, Germany),
Thomas Huckle (TU Munich, Germany)
Track(s):
Computer Science & Applied Mathematics

In the minisymposium "Parallel Numerical Linear Algebra" we will address two major problems. The first part concentrates on dense eigenvalue solvers and is based on the work of the ELPA-AEO project. The underlying problems are Hermitian generalized eigenvalue problems and the parallel computation of a large part of the spectrum; the talks will present theoretical results and practical implementations. The second topic is the parallel solution of systems of linear equations. Here, the first talk considers the parallelization of smoothers in multigrid methods, and the second presents parallel preconditioners based on ILU, where the resulting sparse triangular systems are themselves preconditioned and solved iteratively to obtain efficient parallel methods.
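For reference, the Hermitian generalized eigenvalue problems treated in the first part have the form A x = lambda B x with A Hermitian and B Hermitian positive definite; the standard reduction to a standard eigenproblem, which the talks on efficient reduction and transformation address at scale, uses a Cholesky (or banded) factorization of B (notation added here only for illustration):

\[
A\,\mathbf{x} = \lambda\, B\,\mathbf{x},
\qquad
B = L L^{H}
\;\Longrightarrow\;
\bigl(L^{-1} A L^{-H}\bigr)\,\mathbf{y} = \lambda\,\mathbf{y},
\quad \mathbf{y} = L^{H}\mathbf{x}.
\]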

15:30 - 16:00
Improving Scalability and Efficiency of Multigrid Methods by Block-Smoothers
Presenter:
Matthias Bolten (University of Kassel, Germany)
+ Abstract
16:00 - 16:30
Iterative Parallel Methods for Deriving ILU-type Preconditioners
Presenter:
Thomas Huckle (TU Munich, Germany)
+ Abstract
16:30 - 17:00
Efficient Reduction of Generalized hpd Eigenproblems
Presenter:
Valeriy Manin (University of Wuppertal, Germany)
+ Abstract
17:00 - 17:30
Efficient Transformation of the General Eigenproblem with Symmetric Banded Matrices to a Banded Standard Eigenproblem
Presenter:
Michael Rippl (TU Munich, Germany)
+ Abstract
17:30 - 18:00
Coffee Break, Foyer

PNL01 Beyond Moore's Law

Chair:
John Shalf (Lawrence Berkeley National Laboratory, United States of America)

By most accounts, we are nearing the limits of conventional photolithography processes. It will be challenging to continue to shrink feature sizes below 5 nm and still realize any performance improvement for digital electronics in silicon. At the current rate of development, the purported “End of Moore’s Law” will be reached between the middle and the end of the next decade.

Shrinking the feature sizes of wires and transistors has been the driver of Moore’s Law for the past five decades, but what might lie beyond the end of the current lithographic roadmaps, and how will it affect computing as we know it? Moore’s Law is, after all, an economic theory, and any option that makes future computing more capable (by some measure) with each new generation could continue Moore’s economic theory well into the future.

The goal of this panel session is to communicate the options for extending computing beyond the end of our current silicon lithography roadmaps. The correct answers may be found in new ways to extend the efficiency or capability of digital electronics, or even in new models of computation such as neuromorphic and quantum computing.

18:15 - 18:30
Directing Matter: Toward Atomic-Scale 3D Nanofabrication for Beyond Moore’s Law Devices
Presenter:
Olga Ovchinnikova (Oak Ridge National Laboratory, United States of America)
+ Abstract + Biography
18:30 - 18:45
Opportunities and Challenges in the Area of Quantum Computing and Quantum Simulation
Presenter:
Thomas Lippert (Forschungszentrum Jülich, Germany)
+ Abstract + Biography
18:45 - 19:00
Neuromorphic Computing – Achievements, Opportunities and Plans
Presenter:
Karlheinz Meier (University of Heidelberg, Germany)
+ Abstract + Biography
19:30 - 22:00
Social evening event (separate registration required), Ristorante Ciani
Chair:
Thomas Quinn (University of Washington, United States of America)
09:00 - 09:50
IP03 Unlocking the Mysteries of the Universe with Supercomputers
Presenter:
Katrin Heitmann (University of Chicago, United States of America)
+ Abstract + Biography

Flash Poster Session

Chair:
Maria Grazia Giuffreda (ETH Zurich / CSCS, Switzerland)

The aim of this session is to allow poster presenters to introduce the topic of their poster and motivate the audience to visit them at the evening poster session. Authors will be strictly limited to 40 seconds each - after this time the presentation will be stopped automatically.

10:30 - 11:00
Coffee Break
11:00 - 13:00
Minisymposia and Papers Sessions
Chair:
Daniel Jacobson (Oak Ridge National Laboratory, United States of America)
11:00 - 11:30
ABCpy: A User-Friendly, Extensible, and Parallel Library for Approximate Bayesian Computation
Presenter:
Marcel Schoengens (ETH Zurich / CSCS, Switzerland)
Track(s):
Engineering
+ Abstract + Paper
11:30 - 12:00
Parallelized Dimensional Decomposition for Large-Scale Dynamic Stochastic Economic Models
Presenter:
Aryan Eftekhari (Università della Svizzera italiana, Switzerland)
Track(s):
Emerging Domains in HPC
+ Abstract + Paper
12:00 - 12:30
A Histogram-Free Multicanonical Monte Carlo Algorithm for the Basis Expansion of Density of States
Presenter:
Ying Wai Li (Oak Ridge National Laboratory, United States of America)
Track(s):
Physics
+ Abstract + Paper
Organizer(s):
Jonas Thies (German Aerospace Center, Germany),
Gerhard Wellein (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)
Track(s):
Computer Science & Applied Mathematics, Physics

The computation of large numbers of inner eigenpairs of large sparse matrices is known to be both an algorithmic challenge and resource-intensive in terms of compute power. As compute capabilities have continuously increased over the past decades, computational models and applications requiring information about inner eigenstates of sparse matrices have become numerically accessible in many research fields. At the same time, new algorithms (e.g. FEAST or SSM) have been introduced, and long-standing methods such as filter diagonalization are still being applied, improved, and extended. However, the trend towards highly parallel (heterogeneous) compute systems is challenging the efficiency of existing solver packages as well as building-block libraries, and calls for new massively parallel solvers with high hardware efficiency across different architectures. Thus, substantial effort is being put into the implementation of new sparse (eigen)solver frameworks, which face challenges in terms of ease of use, extensibility, sustainability, and hardware efficiency. Software engineering and holistic performance engineering concepts are deployed to address these challenges.

The significant momentum in the application fields, numerical methods, and software layers calls for strong interaction between the scientists involved in these activities in order to provide sustainable and hardware-efficient frameworks for computing inner eigenvalues of large sparse matrices. The minisymposium offers a platform to bring together leading experts in this field to discuss recent developments at all levels: from the application down to hardware-efficient implementations of basic kernel operations. Application experts will present their current and upcoming research requiring the computation of inner eigenvalues. State-of-the-art eigensolvers and new algorithmic developments will be discussed along with challenges faced by library developers in terms of software sustainability and hardware efficiency. Many of these topics are not limited to inner sparse eigenvalue problems but are of general interest for sparse linear algebra algorithms on current and future HPC architectures.
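Methods such as FEAST and SSM mentioned above are built on contour-integral spectral projection; a generic statement of the underlying identity (added here only for illustration, for a standard eigenproblem A x = lambda x and a closed contour Gamma enclosing the eigenvalues of interest) is

\[
P \;=\; \frac{1}{2\pi i}\oint_{\Gamma} (z I - A)^{-1}\, dz ,
\]

the spectral projector onto the invariant subspace associated with the enclosed eigenvalues. In practice the contour integral is approximated by numerical quadrature, so each quadrature node requires the solution of a shifted sparse linear system, which is where most of the parallelism and hardware-efficiency questions discussed in this minisymposium arise.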

Part 1 of the minisymposium focuses on applications and algorithms.

11:00 - 11:30
Interior Eigenvalue and Eigenvalue Density Computations in Quantum Physics Applications
Presenter:
Andreas Alvermann (University of Greifswald, Germany)
+ Abstract
11:30 - 12:00
Nonlinear Sakurai-Sugiura Method for Electronic Transport Calculation on KNL Cluster
Presenter:
Tetsuya Sakurai (University of Tsukuba, Japan)
+ Abstract
12:00 - 12:30
Extremely Large Quantum Material Simulations with Novel Linear Algebraic Algorithms and Massively Parallel Supercomputers
Presenter:
Takeo Hoshi (Tottori University, Japan)
+ Abstract
12:30 - 13:00
Rational and Polynomial Filtering for Eigenvalue Problems and the EVSL Project
Presenter:
Yousef Saad (University of Minnesota, United States of America)
+ Abstract
Organizer(s):
Markus Huber (TU Munich, Germany)
Track(s):
Computer Science & Applied Mathematics

This series of two minisymposia provides a platform for exchanging ideas about scalable, memory-efficient, fast, and resilient solution techniques. These characteristics are crucial for science and engineering applications that will make use of exascale computing, such as geophysics, astrophysics, and aerodynamics. Algorithms in high-performance computing require a rethinking of standard approaches to ensure, on the one hand, full use of future computing power and, on the other hand, energy efficiency. A careful implementation of all performance-relevant parts and an intelligent combination with external libraries are fundamental for exascale computation. Multigrid and domain decomposition methods play an important role in many scientific applications, yet the two communities have often developed their ideas separately while exploiting the immense compute power of supercomputers. The minisymposia address both communities and focus on exchanging current research progress related to exascale-enabled solution techniques.

11:00 - 11:30
Novel Applications of Multigrid Type Methods in Uintah Approaches to Exascale
Presenter:
Martin Berzins (University of Utah, United States of America)
+ Abstract + Presentation
11:30 - 12:00
Linear Solvers at Scale: Numerical Scalability and Resiliency Inside!
Presenter:
Matthieu Kuhn (INRIA, France)
+ Abstract
12:00 - 12:30
Fast and Scalable Multigrid Solver Employing Agglomeration Techniques
Presenter:
Markus Huber (TU Munich, Germany)
+ Abstract
12:30 - 13:00
Multigrid Preconditioners for High Order Finite Element Discretisations
Presenter:
Dave A. May (University of Oxford, United Kingdom)
+ Abstract
Organizer(s):
Stephan Brunner (EPFL, Switzerland)
Track(s):
Physics, Computer Science & Applied Mathematics

Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes that can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, task-based parallelism, as well as new algorithms adapted to modern hardware. This minisymposium brings together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.

11:00 - 11:30
Computational Challenges Towards Strong Scaling Gyrokinetic Eulerian Codes at Exa-scale
Presenter:
Yasuhiro Idomura (Japan Atomic Energy Agency, Japan)
+ Abstract
11:30 - 12:00
Status of the Exascale Computing Project on High-Fidelity Whole Device Modeling of Magnetically Confined Fusion Plasma
Presenter:
Stephane Ethier (Princeton Plasma Physics Lab, United States of America)
+ Abstract
12:00 - 12:30
Gyrokinetic Global Electromagnetic Simulations with ORB5 and GENE.
Presenter:
Natalia Tronko (Max Planck Institute for Plasma Physics, Germany)
+ Abstract
12:30 - 13:00
Beyond Electrodynamics with PIConGPU: Performance Portable, Open Multi-Physics HPC Simulations for Laser-Plasma Experiments at the European XFEL
Presenter:
Axel Huebl (Helmholtz-Zentrum Dresden-Rossendorf, Germany)
+ Abstract
Organizer(s):
Marta Bon (ETH Zurich, Switzerland)
Track(s):
Chemistry & Materials, Computer Science & Applied Mathematics

The design of materials for energy production and storage is a subject of great scientific and technological interest, and its potential impact on society is considerable. The study of such systems is, however, rather challenging because of their complexity: one deals with systems in which reactions take place at surfaces in the presence of highly disordered environments. Car-Parrinello-type simulations combined with ab initio methods are of course needed. In this minisymposium we invite specialists to discuss the peculiar challenges in this field. The issues we expect to cover include oxidation processes in solution, electron transfer, and the morphology and chemistry of interfaces.

11:00 - 11:30
Accelerating Rare Events in Electrochemistry
Presenter:
Marta Bon (ETH Zurich, Switzerland)
+ Abstract
11:30 - 12:00
Computational Studies of Perovskite Solar Cell Materials
Presenter:
Ursula Rothlisberger (EPFL, Switzerland)
+ Abstract
12:00 - 12:30
Nature-Inspired Water Splitting for Sustainable Hydrogen Production
Presenter:
Sandra Luber (University of Zurich, Switzerland)
+ Abstract
12:30 - 13:00
Ab Initio Molecular Dynamics of Liquids: From CO2 Conversion to Osmotic Energy Conversion
Presenter:
Rodolphe Vuilleumier (Pierre and Marie Curie University, France)
+ Abstract
Organizer(s):
Igor Pivkin (Università della Svizzera italiana, Switzerland)
Track(s):
Life Sciences, Computer Science & Applied Mathematics

The main objective of this symposium is to bring together international scientists working in the area of particle-based modeling with applications in Life Sciences, Fluids, and Materials. Numerical methods include, but are not restricted to, Coarse-Grained Molecular Dynamics (CG-MD), Dissipative Particle Dynamics (DPD), Smoothed Dissipative Particle Dynamics (SDPD), Smoothed Particle Hydrodynamics (SPH), the Lattice-Boltzmann Method (LBM), the Moving Particle Semi-Implicit Method (MPS), Brownian Dynamics (BD), and Stokesian Dynamics (SD). The goal of the minisymposium is, on the one hand, to share state-of-the-art results in various applications of particle-based methods and, on the other, to discuss technical issues of the computational modeling.

11:00 - 11:30
An Unstructured-Mesh Generator Based on SPH Analogy
Presenter:
Lin Fu (TU Munich, Germany)
+ Abstract
11:30 - 12:00
Smoothed Dissipative Particle Dynamics with Angular Momentum Conservation
Presenter:
Dmitry Fedosov (Forschungszentrum Jülich, Germany)
+ Abstract
12:00 - 12:30
Hybrid Simulation of Turbulent Wake Vortices with Rain
Presenter:
Philippe Billuart (Université catholique de Louvain, Belgium)
+ Abstract
12:30 - 13:00
Coarse-Grained Protein Model for Dissipative Particle Dynamics
Presenter:
Igor V. Pivkin (Università della Svizzera italiana, Switzerland)
+ Abstract
Organizer(s):
Peter Bauer (ECMWF, United Kingdom),
Oliver Fuhrer (MeteoSwiss, Switzerland)
Track(s):
Climate & Weather

Progress in weather and climate modeling is tightly linked to the increase in computing resources available for such models. Emerging heterogeneous high-performance architectures are a unique opportunity to address these requirements in an energy- and time-efficient manner. The hardware changes of emerging computing platforms are accompanied by dramatic changes in programming paradigms, and these changes have only just started. Adapting current weather and climate codes to efficiently exploit such architectures requires an effort that is both costly and error-prone. The long software life cycles of weather and climate codes render the situation even more critical, as hardware life cycles are much shorter in comparison. Furthermore, atmospheric models are developed and used by a large variety of researchers on a myriad of computing platforms, which makes portability a crucial requirement in any development. Developers of weather and climate models are struggling to achieve a better separation of concerns, in order to separate the high-level specification of equations and solution algorithms from the hardware-dependent, optimized low-level implementation. The solutions to achieve this will probably differ between different parts of the codes, owing to different predominant algorithmic motifs and data structures. Using concrete porting efforts as examples, this session will illustrate the different approaches used today and (possibly) in the future.

11:00 - 11:30
The CLAW Compiler: Abstractions for Weather and Climate Models
Presenter:
Valentin Clément (ETH Zurich, Switzerland)
+ Abstract + Presentation
11:30 - 12:00
Performance Portable Acceleration of Weather and Climate Dwarfs
Presenter:
Peter Messmer (NVIDIA Inc., Switzerland)
+ Abstract
12:00 - 12:30
Performance Portable Dynamical Cores on Irregular Grids Using Domain Specific Languages
Presenter:
Carlos Osuna Escamilla (MeteoSwiss, Switzerland)
+ Abstract
12:30 - 13:00
LFRic and PSyclone: A Domain Specific Language Approach to Atmospheric Models for Exascale
Presenter:
Christopher Maynard (Met Office, United Kingdom)
+ Abstract
Organizer(s):
Anouar Benali (Argonne National Laboratory, United States of America)
Track(s):
Chemistry & Materials

The ability to computationally design, optimize, or understand the properties of energy-relevant materials is fundamentally contingent on the existence of methods to simulate them accurately, efficiently, and reliably. Quantum mechanics based approaches must necessarily play a foundational role, since only these approaches can describe matter in a truly first-principles (parameter-free) and therefore robust manner. Quantum Monte Carlo (QMC) methods are ideal candidates for this since they robustly deliver highly accurate calculations of complex materials and, with increased computer power, provide systematically improvable accuracies that are not possible with other first-principles methods. By directly solving the Schrödinger equation and by treating the electrons at a consistent many-body level, these methods can be applied to general elements and materials, and are unique in satisfying robust variational principles. More accurate solutions result in lower variational energies, enabling robust confidence intervals to be assigned to predictions. The stochastic nature of QMC facilitates mapping onto high-performance computing architectures, and QMC is one of the few computational materials methods capable of fully exploiting today’s petaflop machines.
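The variational principle invoked above can be stated compactly (a textbook relation, added here only for illustration; Psi_T is a trial wave function, H the Hamiltonian, E_0 the exact ground-state energy):

\[
E_0 \;\le\; E[\Psi_T] \;=\;
\frac{\langle \Psi_T | \hat{H} | \Psi_T \rangle}{\langle \Psi_T | \Psi_T \rangle},
\]

so that any improvement of the trial wave function can only lower (or leave unchanged) the computed energy, which is what makes lower variational energies a robust measure of accuracy in QMC.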

This symposium will present some of the latest developments on QMC methods, from an application and a development perspective.

11:00 - 11:30
Stochastic Multi-Reference Perturbation Theory
Presenter:
Michel Caffarel (CNRS, France)
+ Abstract
11:30 - 12:00
High-Performance Computing of Accurate Electronic Structures from Model Space Quantum Monte Carlo
Presenter:
Seiichiro Ten-no (Kobe University, Japan)
+ Abstract
12:00 - 12:30
Preparing Quantum Monte Carlo for the Exascale
Presenter:
Luke Shulenburger (Sandia National Laboratories, United States of America)
+ Abstract
12:30 - 13:00
New Algorithms in QMCPACK; Science Applications in the Path to Exascale
Presenter:
Anouar Benali (Argonne National Laboratory, United States of America)
+ Abstract
Organizer(s):
Rainald Ehrig (Zuse Institute Berlin, Germany),
Susanna Röblitz (Freie Universität Berlin, Germany)
Track(s):
Emerging Domains in HPC

Human fertility is based on physiological events such as adequate follicle maturation, ovulation, ovum fertilization, corpus luteum formation, and endometrial implantation, proceeding in chronological order. Diseases such as endometriosis or polycystic ovary syndrome seriously disturb menstrual cycle patterns, oocyte maturation, and consequently fertility. Besides endocrine diseases, several environmental and lifestyle factors, especially smoking and obesity, also have a negative impact on fertility. Modern techniques in reproductive medicine, such as in-vitro fertilization or intracytoplasmic sperm injection, have increased the chances of successful reproduction. However, current success rates vary significantly among clinics, still reaching only about 35% even in well-functioning centers. This is mainly due to the use of different treatment protocols and limited knowledge about individual variability in the dynamics of reproductive processes.

This minisymposium brings together researchers with different scientific backgrounds (computer science, mathematics, medicine) who work on developing model-based clinical decision support systems for reproductive endocrinologists, enabling the simulation and optimization of treatment strategies in silico. Virtual physiological human (VPH) models, together with patient-specific parameterizations (Virtual Patients), formalized treatment strategies (Virtual Doctors), and software tools (Virtual Hospital), enable in silico clinical trials, that is, clinical trials performed by means of computer simulations over a population of virtual patients. In silico clinical trials are recognized as a disruptive key innovation for medicine, as they allow medical scientists to reduce and postpone invasive, risky, costly, and time-consuming in vivo testing of new treatments to much later stages of the testing process, when a deeper knowledge of their effectiveness and side effects has been acquired via simulations.

The talks in the minisymposium will highlight different aspects of a virtual hospital in reproductive medicine.

- Distributed service oriented systems for clinical decision support
- HPC within in silico clinical trials
- Construction of large virtual patient populations
- Formalization of treatment strategies in silico
- VPH model validation, treatment verification and the design of individualized protocols
- Databases and software for the virtual hospital
- Large scale integrated physiology models

11:00 - 11:30
HPC within In Silico Clinical Trials: PAEON experience
Presenter:
Enrico Tronci (Sapienza University, Italy)
+ Abstract
11:30 - 12:00
Building Clinical Decision Support Systems (CDSS) in RoR: Lessons from the PAEON Project
Presenter:
Fabian Ille (Lucerne University of Applied Sciences and Arts, Switzerland)
+ Abstract
12:00 - 12:30
A Clinical Decision Support System for Hormonal Treatments in Reproductive Endocrinology
Presenter:
Rainald Ehrig (Zuse Institute Berlin, Germany)
+ Abstract
12:30 - 13:00
Integrated Physiology in Modelica: HumMod and Physiomodel
Presenter:
Jiří Kofránek (Charles University, Czech Republic)
+ Abstract
13:00 - 14:00
Lunch, Foyer
14:00 - 16:00
Minisymposia and Papers Sessions
Chair:
Olaf Schenk (Università della Svizzera italiana, Switzerland)
14:00 - 14:30
Fast and Scalable Low-Order Implicit Unstructured Finite-Element Solver for Earth's Crust Deformation Problem
Presenter:
Kohei Fujita (University of Tokyo, Japan)
Track(s):
Solid Earth Dynamics
+ Abstract + Paper
14:30 - 15:00
Load Balancing and Patch-Based Parallel Adaptive Mesh Refinement for Tsunami Simulation on Heterogeneous Platforms Using Xeon Phi Coprocessors
Presenter:
Chaulio Resende Ferreira (TU Munich, Germany)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Paper + Presentation
15:00 - 15:30
A Scalable Object Store for Meteorological and Climate Data
Presenter:
Simon D. Smart (ECMWF, United Kingdom)
Track(s):
Climate & Weather
+ Abstract + Paper + Presentation
Organizer(s):
Yoshitaka Tateyama (National Institute for Materials Science, Japan)
Track(s):
Chemistry & Materials

Interfaces (solid-solid, solid-liquid, solid-gas, as well as liquid-gas) give rise to a variety of interesting and crucial functions in condensed-matter physics and chemistry. The space-charge layer plays an important role in semiconductor physics, underlying several fundamentals of electronic devices. Electrochemistry, on the other hand, must always account for the electric double layer, which is crucial for catalysis, solar-cell, and battery applications. These modulations of charge-carrier distributions can extend up to the micrometer scale, although nanometer-scale modulation occurs as well in several cases. Therefore, first-principles electronic structure calculations alone do not work well. For these issues, special techniques to deal with the interface are necessary, on top of large-scale and long-time QM-based simulations. QM/MM techniques or combinations of QM with continuum or classical theories are potential solutions. This minisymposium brings together cutting-edge researchers working on these issues to discuss and evaluate individual methods and to suggest future directions. This is important because the relationships among the various interface methods are otherwise difficult to discern. Moreover, the materials-science flavor of this minisymposium offers perspectives aimed at computer scientists and applied mathematicians, which should encourage the future development of interdisciplinary techniques.

14:00 - 14:30
Searching Stable Interface Structures with Bayesian Optimization
Presenter:
Koji Tsuda (University of Tokyo, Japan)
+ Abstract
14:30 - 15:00
Combining Ab Initio Calculations with Electrostatic Models to Describe Defects at Surfaces at Realistic Temperature, Pressure, and Doping Conditions
Presenter:
Sergey V. Levchenko (Fritz Haber Institute Berlin, Germany)
+ Abstract
15:00 - 15:30
First-Principles Simulation of Electrochemical Reactions at Solid-Liquid Interface
Presenter:
Minoru Otani (National Institute of Advanced Industrial Science and Technology, Japan)
+ Abstract
15:30 - 16:00
Challenges in Electronic Structure Modeling of Battery Interfaces
Presenter:
Kevin Leung (Sandia National Laboratories, United States of America)
+ Abstract
Organizer(s):
Bernd Bruegmann (Friedrich-Schiller-Universität Jena, Germany),
Luciano Rezzolla (Goethe University, Germany)
Track(s):
Physics

With a mass larger than that of the Sun compressed into an almost perfect sphere with a radius of only a dozen kilometers, neutron stars are the most compact material astrophysical objects we know. In their cores, particles are squeezed together more tightly than in atomic nuclei, and no terrestrial experiment can reproduce the extreme physical conditions of density, temperature, and gravity. With such properties, it is clear that neutron stars in binary systems are unique laboratories to explore fundamental physics – such as the state of matter at nuclear densities – and fundamental astrophysics – such as the physics behind the "central engine" of short gamma-ray bursts. Yet, such an exploration does not come easy. The nonlinear dynamics of binary neutron stars, which requires the combined solution of the Einstein equations together with those of relativistic hydrodynamics and magnetohydrodynamics, and the complex microphysics that accompanies the inspiral and merger, make sophisticated numerical simulations in three dimensions the only route to accurate modeling.

This minisymposium will focus on the gravitational-wave emission during the inspiral and the connection between merging binaries and the corresponding electromagnetic counterpart. These two problems require urgent attention as they are both likely to play an important role in the imminent detection of gravitational waves from binary neutron stars by interferometric detectors such as LIGO and Virgo.

14:00 - 14:30
Simulating Generic Binary Neutron Star Mergers
Presenter:
Tim Dietrich (Max Planck Institute for Gravitational Physics, Germany)
+ Abstract
14:30 - 15:00
Entropy-Limited Hydrodynamics: A Novel Approach to Relativistic Hydrodynamics
Presenter:
Federico Guercilena (Goethe University, Germany)
+ Abstract
15:00 - 15:30
Multi-Messenger Signals from Gravitational Wave Sources
Presenter:
Stephan Rosswog (Stockholm University, Sweden)
+ Abstract
15:30 - 16:00
Numerical Simulations of the Magnetic Field Amplification in Neutron Stars
Presenter:
Pablo Cerda-Duran (University of Valencia, Spain)
+ Abstract
Organizer(s):
Jonas Thies (German Aerospace Center, Germany),
Gerhard Wellein (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)
Track(s):
Computer Science & Applied Mathematics, Physics

The computation of large numbers of inner eigenpairs of large sparse matrices is known to be both algorithmically challenging and resource-intensive in terms of compute power. As compute capabilities have continuously increased over the past decades, computational models and applications requiring information about inner eigenstates of sparse matrices have become numerically accessible in many research fields. At the same time, new algorithms (e.g. FEAST or SSM) have been introduced, and long-standing methods such as filter diagonalization are still being applied, improved, and extended. However, the trend towards highly parallel (heterogeneous) compute systems is challenging the efficiency of existing solver packages as well as building-block libraries, and calls for new massively parallel solvers with high hardware efficiency across different architectures. Thus, substantial effort is put into the implementation of new sparse (eigen)solver frameworks, which face challenges in terms of ease of use, extensibility, sustainability, and hardware efficiency. Software engineering and holistic performance engineering concepts are deployed to address these challenges. The significant momentum in the application fields, numerical methods, and software layers calls for strong interaction between the scientists involved in these activities to provide sustainable and hardware-efficient frameworks for computing inner eigenvalues of large sparse matrices. The minisymposium offers a platform to bring together leading experts in this field to discuss recent developments at all levels: from the applications down to hardware-efficient implementations of basic kernel operations. Application experts will present their current and upcoming research fields requiring the computation of inner eigenvalues. State-of-the-art eigensolvers and new algorithmic developments will be discussed, along with the challenges faced by library developers in terms of software sustainability and hardware efficiency. Many of these topics are not limited to inner sparse eigenvalue problems but are of general interest for sparse linear algebra algorithms on current and future HPC architectures.

Part 2 of the minisymposium focuses on algorithms as well as software and performance aspects.
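As a small illustration of what "inner eigenpairs" means in practice (a sketch only; none of the solver frameworks discussed in this session, such as FEAST or filter diagonalization, are shown), the following Python snippet uses SciPy's shift-and-invert mode to compute the eigenvalues of a sparse symmetric matrix closest to a target value in the interior of the spectrum. The test matrix and target are arbitrary choices for the example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse symmetric test matrix: a 1D Laplacian with a slightly perturbed diagonal.
n = 2000
rng = np.random.default_rng(0)
diag = 2.0 + rng.normal(scale=0.1, size=n)
A = sp.diags([-np.ones(n - 1), diag, -np.ones(n - 1)], [-1, 0, 1], format="csc")

# Interior eigenpairs closest to the target sigma via shift-and-invert:
# internally (A - sigma*I)^{-1} is factorized and its extremal eigenvalues
# are computed, which correspond to the eigenvalues of A nearest to sigma.
sigma = 2.0
vals, vecs = spla.eigsh(A, k=10, sigma=sigma)
print("eigenvalues of A nearest to", sigma, ":", np.sort(vals))
```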

14:00 - 14:30
Parallel Preconditioned Iterative Solvers on Manycore Architectures
Presenter:
Kengo Nakajima (The University of Tokyo, Japan)
+ Abstract
14:30 - 15:00
Batched Factorization and Inversion Routines for Block-Jacobi Preconditioning on GPUs
Presenter:
Hartwig Anzt (University of Tennessee, United States of America)
+ Abstract
15:00 - 15:30
Software and Performance Engineering for Iterative Eigensolvers
Presenter:
Jonas Thies (German Aerospace Center, Germany)
+ Abstract
15:30 - 16:00
An Overview of the Trilinos Project Exascale Strategy: From Algorithms to Software Products
Presenter:
Michael A. Heroux (Sandia National Laboratories, United States of America)
+ Abstract
Organizer(s):
Simone Deparis (EPFL, Switzerland),
Axel Klawonn (University of Cologne, Germany),
Oliver Rheinbach (Freiberg University of Mining and Technology, Germany)
Track(s):
Emerging Domains in HPC, Life Sciences

Modeling and simulation of problems in cardiovascular mechanics can contribute significantly to the development of the field of precision medicine for cardiovascular and systemic phenomena. The relevant models of the related multiphysics problems can only be numerically simulated through the efficient use of modern techniques from computational mathematics, mechanics, and high-performance computing. Problems addressed in this minisymposium include reentry dynamics in cardiac electromechanical models, early atherosclerosis progression, and fluid-structure interaction using realistic arterial wall material models. This minisymposium aims at gathering researchers and experts in computational modeling and simulation of the heart and the systemic circulation.

14:00 - 14:30
Domain-Decomposition-Based Fluid Structure Interaction Methods using Nonlinear Anisotropic Arterial Wall Models
Presenter:
Alexander Heinlein (University of Cologne, Germany)
+ Abstract
14:30 - 15:00
A Multiphysics Approach for Early Atherosclerosis Progression
Presenter:
Moritz Thon (TU Munich, Germany)
+ Abstract
15:00 - 15:30
An Integrated Electro-Mechano-Fluid Model for Cardiac Simulations
Presenter:
Antonello Gerbi (EPFL, Switzerland)
+ Abstract
15:30 - 16:00
Scalable Domain Decomposition Solvers for Cardiac Electro-Mechanical Dynamics
Presenter:
Luca F. Pavarino (University of Pavia, Italy)
+ Abstract
Organizer(s):
Laurent Villard (EPFL, Switzerland)
Track(s):
Physics, Computer Science & Applied Mathematics

Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes that can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, and task-based parallelism, as well as new algorithms that are adapted to modern hardware. This minisymposium brings together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.
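For readers unfamiliar with the particle-in-cell building blocks mentioned in the talks below (e.g. charge deposition in ORB5), here is a hedged, self-contained toy example of 1D cloud-in-cell charge deposition in Python. It is purely illustrative and unrelated to the actual implementation of any code in this session; the grid size, particle count, and periodic boundaries are arbitrary assumptions.

```python
import numpy as np

# Toy 1D cloud-in-cell (CIC) charge deposition on a periodic grid.
n_cells, n_part = 64, 100_000
rng = np.random.default_rng(0)
x = rng.random(n_part) * n_cells          # particle positions in cell units
w = np.ones(n_part)                       # particle weights (charges)

i = np.floor(x).astype(int)               # index of the cell containing each particle
frac = x - i                              # fractional position within the cell
rho = np.zeros(n_cells)
# Linear weighting: each particle contributes to its two neighbouring grid nodes.
np.add.at(rho, i % n_cells, w * (1.0 - frac))
np.add.at(rho, (i + 1) % n_cells, w * frac)

print("total deposited charge:", rho.sum(), "(equals the number of particles)")
```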

14:00 - 14:30
Scalable and Fault-Tolerant Simulations with GENE and the Sparse Grid Combination Technique
Presenter:
Mario Heene (University of Stuttgart, Germany)
+ Abstract
14:30 - 15:00
Two-Species Semi-Lagrangian Simulations for Solving the Vlasov-Poisson System in 2d2v
Presenter:
Yann Barsamian (University of Strasbourg, France)
+ Abstract
15:00 - 15:30
Hybrid OpenMP/MPI Parallelization of the Charge Deposition Step in the Global Gyrokinetic Particle-In-Cell Code ORB5
Presenter:
Emmanuel Lanti (EPFL, Switzerland)
+ Abstract + Presentation
15:30 - 16:00
Prediction of the Node to Node Communication Costs of a New Gyrokinetic Code with Toroidal Domain
Presenter:
Andreas Jocksch (ETH Zurich / CSCS, Switzerland)
+ Abstract
Organizer(s):
Franziska Erlekam (Zuse Institute Berlin, Germany)
Track(s):
Computer Science & Applied Mathematics, Life Sciences

This minisymposium will bring together researchers who use molecular simulation in their respective fields, in order to discuss recent advances and to exchange experiences and ideas. The focus of this minisymposium is the analysis of very large biological and chemical data sets arising from the simulation of complex molecular systems, by means of developing efficient algorithms and implementing them on high-performance supercomputers.

This approach is necessary for designing smart drug-like molecules for precision medicine. The tools include, but are not limited to, algebraic stochastic dimension reduction methods such as nonnegative matrix factorization for very large data sets obtained from atomic spectroscopy, Markov state models (MSMs), multiscale methods in time and space for studying molecular conformations, PDE-based analysis of multivalent binding kinetics in biochemical systems, and spectral clustering.
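As a toy illustration of one of the tools listed above, the sketch below builds a Markov state model from a synthetic discrete trajectory: it counts lag-time transitions, row-normalizes them into a transition matrix, and extracts implied timescales from its eigenvalues. The three-state transition matrix and trajectory length are made-up example data, not results from any talk in this session.

```python
import numpy as np

# Synthetic discrete trajectory over 3 "conformational states" (made-up data).
rng = np.random.default_rng(1)
T_true = np.array([[0.95, 0.04, 0.01],
                   [0.03, 0.90, 0.07],
                   [0.02, 0.08, 0.90]])
traj = [0]
for _ in range(50_000):
    traj.append(rng.choice(3, p=T_true[traj[-1]]))
traj = np.array(traj)

# Markov state model: count transitions at lag time tau and row-normalize.
tau = 1
counts = np.zeros((3, 3))
np.add.at(counts, (traj[:-tau], traj[tau:]), 1.0)
T_est = counts / counts.sum(axis=1, keepdims=True)

# Implied timescales follow from the non-unit eigenvalues of the transition matrix.
eigvals = np.sort(np.linalg.eigvals(T_est).real)[::-1]
timescales = -tau / np.log(eigvals[1:])
print("estimated transition matrix:\n", np.round(T_est, 3))
print("implied timescales (in units of tau):", np.round(timescales, 1))
```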

14:00 - 14:30
Kinetics of Multivalent Bindings
Presenter:
Franziska Erlekam (Zuse Institute Berlin, Germany)
+ Abstract
14:30 - 15:00
Generalized Perron Cluster Analysis for Non-Equilibrium Steady State Systems
Presenter:
Franziska Erlekam (Zuse Institute Berlin, Germany)
+ Abstract
15:00 - 15:30
Markov State Models with Reweighting
Presenter:
Luca Donati (Freie Universität Berlin, Germany)
+ Abstract
15:30 - 16:00
Applications of Nonnegative Matrix Factorization in Spectroscopy
Presenter:
Amir Niknejad (College of Mount Saint Vincent, United States of America)
+ Abstract
Organizer(s):
Igor Pivkin (Università della Svizzera italiana, Switzerland)
Track(s):
Life Sciences, Computer Science & Applied Mathematics

The main objective of this minisymposium is to bring together international scientists working in the area of particle-based modeling with applications in Life Sciences, Fluids, and Materials. Numerical methods include, but are not restricted to, Coarse-Grained Molecular Dynamics (CG-MD), Dissipative Particle Dynamics (DPD), Smoothed Dissipative Particle Dynamics (SDPD), Smoothed Particle Hydrodynamics (SPH), the Lattice Boltzmann Method (LBM), the Moving Particle Semi-Implicit Method (MPS), Brownian Dynamics (BD), and Stokesian Dynamics (SD). The goal of the minisymposium is, on the one hand, to share state-of-the-art results from various applications of particle-based methods and, on the other, to discuss technical issues of computational modeling.

14:00 - 14:30
Bayesian Uncertainty Quantification and Propagation for Dissipative Particle Dynamics
Presenter:
Lina Kulakova (ETH Zurich, Switzerland)
+ Abstract
14:30 - 15:00
Combined Computational and Experimental Study of Cell Deformation in Microfluidic Devices
Presenter:
Kirill Lykov (Università della Svizzera italiana, Switzerland)
+ Abstract
15:00 - 15:30
Adaptive Resolution Simulations Coupling Atomistic to Supramolecular Water Models
Presenter:
Matej Praprotnik (National Institute of Chemistry, Slovenia)
+ Abstract
15:30 - 16:00
A Hybrid Smoothed Dissipative Particle Dynamics and Immersed Boundary Method (SDPD-IBM) for Simulation of Red Blood Cells (RBCs) in Flows
Presenter:
Ting Ye (Jilin University, China)
+ Abstract
Organizer(s):
Karla Morris (Sandia National Laboratories, United States of America)
Track(s):
Computer Science & Applied Mathematics, Engineering

This minisymposium lies at the interface of computer science and applied mathematics, presenting recent advances in methods, ideas and algorithms addressing resilience for extreme-scale computing.

Extreme-scale systems are expected to exhibit more frequent faults, originating in both hardware and software, making resilience a key problem to face. On the hardware side, challenges will arise from the expected increase in the number of components, variable operational modes (e.g. lower voltages to meet energy requirements), and increasing complexity (e.g. deeper memory hierarchies, heterogeneous cores, and more, smaller transistors). The software stack will need to keep up with the increasing hardware complexity, hence becoming itself more error-prone.

In general, we can distinguish between three main categories of faults, namely hard (where a hardware component fails and needs to be fixed/replaced), soft/transient (a fault occurs, but is corrected by the hardware or low-level system software), and silent/undetectable (an error occurs but cannot be detected and fixed). The first two categories have a well-defined impact on the run and the system itself. The third class is more subtle because its effect is simply to alter stored, transmitted, or processed information, and there is no opportunity for an application to directly recover from a fault. This can lead to noticeable impacts such as crashes and hangs, as well as corrupted results.

Current systems do not have an integrated approach to fault tolerance: the various subsystems have their own mechanisms for error detection and recovery (e.g. ECC memory). There is also no good error isolation; for example, the failure of any component in a parallel job generally causes the entire job to fail. In fact, the current Message Passing Interface (MPI) standard does not support failing ranks. Common approaches to fault tolerance include hardware-level redundancy, algorithm-based error correction, and checkpoint/restart. The latter is currently the most widely used approach. However, the tight power budget targeted for future systems and the expected shortening of the mean time between failures (MTBF) may render it infeasible for extreme-scale computing.
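To make the checkpoint/restart idea concrete, here is a minimal, hypothetical sketch in Python of an iterative computation that periodically saves its state and resumes from the last checkpoint after a crash. The file name, iteration counts, and the stand-in "solver sweep" are invented for the example and do not correspond to any production resilience framework.

```python
import os
import pickle

CKPT = "solver_state.pkl"                 # hypothetical checkpoint file

def solver_sweep(x):
    # Stand-in for one sweep of an iterative solver (Babylonian iteration for sqrt(2)).
    return 0.5 * (x + 2.0 / x)

def run(n_iters=100, ckpt_every=10):
    # Resume from the last checkpoint if one exists, otherwise start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            start, x = pickle.load(f)
    else:
        start, x = 0, 1.0
    for i in range(start, n_iters):
        x = solver_sweep(x)
        if (i + 1) % ckpt_every == 0:
            # Write to a temporary file and rename atomically, so a crash
            # during the write cannot corrupt the previous checkpoint.
            with open(CKPT + ".tmp", "wb") as f:
                pickle.dump((i + 1, x), f)
            os.replace(CKPT + ".tmp", CKPT)
    return x

print("converged value:", run())
```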

It is increasingly recognized that hardware-only resilience will likely become infeasible in the long term. This creates the need for an integrated approach, in which resilience is tackled across all layers to mitigate the impact of faults in a holistic fashion, while keeping its interplay with the energy budget under consideration. Hence, in parallel to the continuous effort aimed at improving resilience in hardware and system software, new approaches and ideas need to be incorporated at the highest level, i.e. in algorithms and applications, to account for potential faults such as silent data corruptions (SDCs). In other words, algorithms themselves need to be made more robust and resilient.

This minisymposium explores HPC resilience in the context of algorithms, applications, hardware, systems, and runtimes. Specifically, the talks have been selected to cover topics ranging from solvers, programming models, and energy-aware computing to approximate computing, memory vulnerability, and post-Moore's-law computing.

14:00 - 14:30
Resilience in Extreme-Scale Iterative Linear Solvers
Presenter:
Wilfried Gansterer (University of Vienna, Austria)
+ Abstract
14:30 - 15:00
Scaling and Energy Analysis for a Resilient ULFM-Based PDE Solver
Presenter:
Francesco Rizzi (Sandia National Laboratories, United States of America)
+ Abstract
15:00 - 15:30
Dynamic and Low Overhead Analysis of Memory Vulnerability
Presenter:
Marc Casas (Barcelona Supercomputing Center, Spain)
+ Abstract
15:30 - 16:00
Inexact Computing and the Interface with Resilience
Presenter:
Laura Monroe (Los Alamos National Laboratory, United States of America)
+ Abstract
16:00 - 16:30
Coffee Break, Foyer

PNL02 Sustainable Software Development and Publication Practices in the Computational Sciences

Chair:
Jack Wells (Oak Ridge National Laboratory, United States of America)

The goal of the PASC papers program is to advance the quality of formal scientific communication between the related disciplines of computational science and engineering. The program was built on the observation that the computer science community traditionally publishes in the proceedings of major international conferences, while the domain science community generally publishes in discipline-specific journals – and cross-readership is very limited. The aim of our initiative is to build and sustain a platform that enables engagement between the computer science, applied mathematics, and domain science communities through a combination of conference participation, conference papers, and post-conference journal publications. The PASC papers initiative allows authors to benefit from the interdisciplinarity and rapid dissemination of results afforded by the conference venue, as well as from the impact associated with subsequent publication in a high-quality scientific journal. To help facilitate such journal publication, PASC has recently formed collaborative partnerships with a number of scientific journals, including Computer Physics Communications (CPC), the Journal of Advances in Modeling Earth Systems (JAMES), and ACM Transactions on Mathematical Software (ACM TOMS). In this panel discussion, representatives from these journals are invited to express their thoughts on publication practices in the computational sciences, including the publication of software codes. We will discuss best practices for sustainable software development and address questions such as: How can we ensure that code and infrastructure will still be there in ten-plus years? How can we validate published results and guarantee reproducibility? Finally, we will describe our vision for the PASC papers initiative going forward.

Panelists:

- Thomas Schulthess (CSCS / ETH Zurich, Switzerland)
- Walter Dehnen (University of Leicester): Editor (Astronomy and Astrophysics) for Computer Physics Communications (CPC)
- Robert Pincus (University of Colorado): Editor in Chief of the Journal of Advances in Modeling Earth Systems (JAMES)
- Michael A. Heroux (Sandia National Laboratories): Associate Editor (Replicated Computational Results) for ACM Transactions on Mathematical Software (TOMS)

Chair:
Rolf Krause (Università della Svizzera italiana, Switzerland)
18:00 - 18:50
IP04 The Times They Are a-Changin' - Computational Discovery in the 21st Century
Presenter:
Nicola Marzari (EPFL, Switzerland)
+ Abstract + Biography
19:00 - 21:00
CHE-01 A Molecular Dynamics Study of the Thermodynamics of Attachment of Saccharides to Halloysite
Presenter:
Riccardo Innocenti Malini (Empa, Switzerland)
Track(s):
Chemistry & Materials
+ Abstract + Presentation
19:00 - 21:00
CHE-02 DBCSR: A Sparse Matrix Multiplication Library for Electronic Structure Codes
Presenter:
Andreas Glöss (University of Zurich, Switzerland)
Track(s):
Chemistry & Materials
+ Abstract + Presentation
19:00 - 21:00
CHE-03 First-Principles Study on Interfaces between Sulfide Electrolyte and Oxide Cathode in All-Solid-State Battery
Presenter:
Yoshitaka Tateyama (National Institute for Materials Science, Japan)
Track(s):
Chemistry & Materials
+ Abstract + Presentation
19:00 - 21:00
CHE-04 Mapping and Classifying Molecules from a High-Throughput Structural Database
Presenter:
Felix Musil (EPFL, Switzerland)
Track(s):
Chemistry & Materials
+ Abstract + Presentation
19:00 - 21:00
CHE-05 Soft-Sphere Continuum Solvation in Electronic Structure Calculations
Presenter:
Giuseppe Fisicaro (University of Basel, Switzerland)
Track(s):
Chemistry & Materials
+ Abstract + Presentation
19:00 - 21:00
CHE-06 Solvent-Aware Interfaces in Continuum Solvation
Presenter:
Oliviero Andreussi (Università della Svizzera italiana, Switzerland)
Track(s):
Chemistry & Materials
+ Abstract + Presentation
19:00 - 21:00
CLI-03 ESCAPE: Accelerating Extreme-Scale Numerical Weather Prediction
Presenter:
Willem Deconinck (ECMWF, United Kingdom)
Track(s):
Climate & Weather
+ Abstract + Presentation
19:00 - 21:00
CLI-04 GridTools: A C++ Library for Computations on Grids
Presenter:
Mauro Bianco (ETH Zurich / CSCS, Switzerland)
Track(s):
Climate & Weather
+ Abstract + Presentation
19:00 - 21:00
CLI-05 High-Performance C++ in Weather Prediction: Challenges, Achievements and Future Models
Presenter:
Pascal Spörri (ETH Zurich, Switzerland)
Track(s):
Climate & Weather
+ Abstract + Presentation
19:00 - 21:00
CLI-06 Large Scale Climate Simulations with COSMO
Presenter:
Hannes Vogt (ETH Zurich / CSCS, Switzerland)
Track(s):
Climate & Weather
+ Abstract
19:00 - 21:00
CLI-07 Reproducible Climate and Weather Simulations: an Application to the COSMO Model
Presenter:
Christophe Charpilloz (MeteoSwiss, Switzerland)
Track(s):
Climate & Weather
+ Abstract + Presentation
19:00 - 21:00
CLI-08 Use of Hybrid Supercomputing Architectures for Numerical Weather Prediction Models
Presenter:
Marco Alemanno (Centro Operativo per la Meteorologia, Italy)
Track(s):
Climate & Weather
+ Abstract + Presentation
19:00 - 21:00
CSM-01 A Novel and Efficient Compressed Algorithm for Non-Negative Matrix Factorization (NMF)
Presenter:
Gabriele Torre (FHNW, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-02 A Resilient ULFM-MPI-Based PDE Solver: Performance Scaling and Energy Analysis
Presenter:
Karla Morris (Sandia National Laboratories, United States of America)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-03 AV-Flow: A Software Library for Fluid Structure Interaction Problems Based on Variational Transfer Immersed Boundary Method
Presenter:
Marco Favino (Università della Svizzera italiana, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-04 Boundary Element Quadrature Schemes for Multi- and Many-Core Architectures
Presenter:
Jan Zapletal (TU Ostrava, Czech Republic)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-05 Energy Efficient High Performance Computing due to Application Dynamism
Presenter:
Jan Zapletal (IT4Innovations National Supercomputing Center, Czech Republic)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-06 Efficient On-The-Fly Operator Assembly for HPC Finite Element codes
Presenter:
Simon Bauer (Ludwig Maximilian University of Munich, Germany)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-07 Flexible and High-Performance Stencil Codes with GridTools4Py
Presenter:
Alberto Madonna (ETH Zurich / CSCS, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-08 Fluid Structure Interaction Simulations of the Human Heart
Presenter:
Dimosthenis Pasadakis (Università della Svizzera italiana, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-09 Model-Driven Choice of PDE Numerical Solvers
Presenter:
Andrea Arteaga (MeteoSwiss, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-10 Open Science with OpenPMD
Presenter:
Axel Huebl (Helmholtz-Zentrum Dresden-Rossendorf, Germany)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-11 Efficient and Portable MPI Support for Approximate Bayesian Computation
Presenter:
Lorenzo Fabbri (Università della Svizzera italiana, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-12 Parallel Immersed Boundary Simulations of Worm-Like Swimmers in the Inertial Flow Regime
Presenter:
Seyedsaeed Mirazimi (Simon Fraser University, Canada)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-13 Parallelization of Graph Partitioning using Metis and OpenMP
Presenter:
Rodrigo Pinto Coelho (Università della Svizzera italiana, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-14 Pruning Highway Networks
Presenter:
Dhananjay Tomar (Università della Svizzera italiana, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-15 Shifter: Experiences with High-Performance Containers
Presenter:
Kean Mariotti (ETH Zurich / CSCS, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-16 Solving Large Scale Quadratic Programming Problems with PERMON
Presenter:
Vaclav Hapla (IT4Innovations National Supercomputing Center, Czech Republic)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-17 Load-Balanced Partition Refinement with the Graph p-Laplacian
Presenter:
Toby Simpson (Università della Svizzera italiana, Switzerland)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
CSM-18 Whole Program Generation for Complex Fluid Flow Solvers
Presenter:
Sebastian Kuckuk (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)
Track(s):
Computer Science & Applied Mathematics
+ Abstract + Presentation
19:00 - 21:00
EMD-01 Scalable MCMC Algorithm for the Accurate Estimation of Exponential Random Graph Models
Presenter:
Maksym Byshkin (Università della Svizzera italiana, Switzerland)
Track(s):
Emerging Domains in HPC
+ Abstract
19:00 - 21:00
ENG-01 A Multiscale Model for the Simulation of Sediment Impact Erosion of Metallic Targets
Presenter:
Sebastián Leguizamón (EPFL, Switzerland)
Track(s):
Engineering
+ Abstract + Presentation
19:00 - 21:00
ENG-02 GPU-SPHEROS: A GPU-Accelerated Versatile Solver Based on the Finite Volume Particle Method
Presenter:
Siamak Alimirzazadeh (EPFL, Switzerland)
Track(s):
Engineering
+ Abstract + Presentation
19:00 - 21:00
LS-01 Aortic Valve Hemodynamics Using Variational Transfer Immersed Boundary Method
Presenter:
Barna Errol Mario Becsek (University of Bern, Switzerland)
Track(s):
Life Sciences
+ Abstract + Presentation
19:00 - 21:00
LS-02 CampaR1: An R Package for Extracting Metastable States from Time Series Data
Presenter:
Davide Garolini (University of Zurich, Switzerland)
Track(s):
Life Sciences
+ Abstract + Presentation
19:00 - 21:00
LS-03 Elucidating the Effect of Polymer Flexibility, Molecular Geometry, and Charge Neutralization on siRNA-Polycation Complexes Free Energy Landscape: A Computational Study
Presenter:
Gianvito Grasso (Scuola universitaria professionale della Svizzera italiana, Switzerland)
Track(s):
Life Sciences
+ Abstract + Presentation
19:00 - 21:00
LS-04 Flow Stability and Transition Past an Aortic Valve Using a Hybrid Multicore/Manycore Massively Parallel Navier-Stokes Solver
Presenter:
Hadi Zolfaghari (University of Bern, Switzerland)
Track(s):
Life Sciences
+ Abstract + Presentation
19:00 - 21:00
LS-05 Focused Diversification of Biomolecules for Drug Discovery by Progress Index-Guided Sampling
Presenter:
Cassiano Langini (University of Zurich, Switzerland)
Track(s):
Life Sciences
+ Abstract
19:00 - 21:00
LS-06 Funnel-Metadynamics 2.0: Graphical User Interface and Implementation in Plumed 2
Presenter:
Stefano Raniolo (Università della Svizzera italiana, Switzerland)
Track(s):
Life Sciences
+ Abstract + Presentation
19:00 - 21:00
PHY-01 Formation of Solid H2 in the ISM
Presenter:
Andreas Füglistaler (University of Geneva, Switzerland)
Track(s):
Physics
+ Abstract + Presentation
19:00 - 21:00
PHY-02 Non-Abelian Fractional Quantum Hall States and Topological Quantum Computation
Presenter:
Kiryl Pakrouski (ETH Zurich, Switzerland)
Track(s):
Physics
+ Abstract + Presentation
19:00 - 21:00
PHY-03 Numerical Method Optimization in Particle-In-Cell Gyrokinetic Plasma Code ORB5
Presenter:
Aaron Scheinberg (EPFL, Switzerland)
Track(s):
Physics
+ Abstract + Presentation
19:00 - 21:00
PHY-04 The Linear Polarization of the Solar Continuum Radiation from Numerical Simulations of the Solar Atmosphere
Presenter:
Flavio Calvo (Istituto Ricerche Solari Locarno, Switzerland)
Track(s):
Physics
+ Abstract + Presentation
09:00 - 11:00
Minisymposia and Papers Sessions
Organizer(s):
Amanda Randles (Duke University, United States of America)
Track(s):
Life Sciences, Emerging Domains in HPC

Large-scale computational simulations have become a key tool in fluid-structure interaction research. The investigation of fundamental flow principles and the interaction of blood cells with surrounding plasma have necessitated the use of high-performance computing and high-fidelity computational methods.  The need for massively parallel simulation of particle-laden fluids has driven the development of novel multiscale coupling techniques to enable high-resolution FSI modeling in complex topologies. This symposium will bring together developers of state-of-the-art multiscale and multiphysics models of hemodynamics with a particular emphasis on rheology and transport phenomena. The set of talks will showcase a range of techniques used to simulate different cell-types in an extensible and scalable manner while highlighting recent advances in computational hemodynamics. This symposium will provide a platform to identify cross-cutting challenges and opportunities for future research.

09:00 - 09:30
Blood Rheology in Complex Geometries
Presenter:
Eva Athena Economides (ETH Zurich, Switzerland)
+ Abstract
09:30 - 10:00
Numerical Simulation of a Compound Capsule in a Constricted Microchannel
Presenter:
John Gounley (Duke University, United States of America)
+ Abstract
10:00 - 10:30
Cell-Based Blood Flow Simulations, Validations, Rheology, and Transport Phenomena
Presenter:
Britt J. M. van Rooij (University of Amsterdam, Netherlands)
+ Abstract
Organizer(s):
Bernd Rinn (ETH Zurich, Switzerland),
Thomas Wüst (ETH Zurich, Switzerland)
Track(s):
Emerging Domains in HPC, Computer Science & Applied Mathematics

Precision medicine is an emerging approach to disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle for each person. Although the idea has been a part of healthcare for many years (e.g. blood transfusions), research in precision medicine has spurred a lot of interest recently, due to the accessibility of large volumes of complex genomics and other biomedical datasets, digitized medical records, and the development of novel methods and tools in data science. Sound, interoperable high-performance computing and "Big Data" analytics and management infrastructures are key to the success of research programs such as the "Swiss Personalized Health Network" (SPHN) initiative starting in 2017. These infrastructures will have to be built in collaboration and coordination between hospitals and universities to allow researchers to perform biomedical research on real patient data beyond institutional and geographical boundaries. A particular challenge of this novel infrastructure is the combination of high-performance and Big Data storage, computing resources, and data management services with high data security and compliance requirements, so that it can fulfill the regulations of the respective federal laws (in Switzerland, particularly the Human Research Act (Humanforschungsgesetz, HFG)) and international best practices in the field.

This minisymposium aims at bringing together experts from biomedical research, from hospital and university computing and data service providers, and from the field of Ethical, Legal and Social Implications (ELSI), and will put the subject into the larger perspective of the upcoming Swiss national research initiative on personalized health care.

09:00 - 09:30
Precision Medicine: Paving the Way for Responsible Data Access and Governance
Presenter:
Alessandro Blasimme (University of Zurich, Switzerland)
+ Abstract
09:30 - 10:00
Leonhard Med: ETH's Answer to the IT Challenges of Personalized Health Research
Presenter:
Bernd Rinn (ETH Zurich, Switzerland)
+ Abstract
10:00 - 10:30
The Swiss Personalized Health Network Data Coordination Center - Enabling Research in Personalized Health by Establishing Interoperability of Health-Related Information
Presenter:
Torsten Schwede (University of Basel, Switzerland)
+ Abstract
10:30 - 11:00
From Big Genomic Data to Molecular Insights
Presenter:
Gunnar Rätsch (ETH Zurich, Switzerland)
+ Abstract
Organizer(s):
William Sawyer (ETH Zurich / CSCS, Switzerland)
Track(s):
Emerging Domains in HPC, Computer Science & Applied Mathematics

Recent years have seen a dramatic explosion in the amount and precision of available raw data. Large amounts of measured and simulated information from all kinds of processes have been accumulated in a wide range of areas - from weather and climate research to astrophysics and neuroscience. If knowledge about such systems is present only in the form of observations or measurement data, the challenging problem of understanding the system becomes a problem of pattern recognition and model reduction in multiple dimensions. Optimization methods have emerged as a central pillar for the practical implementation of these data analysis problems, allowing a unified handling of a wide variety of data analysis algorithms. These include clustering methods (standard K-means, fuzzy C-means, or Fuzzy Clustering based on Regression Models (FCRM)) as well as more advanced methods based on concepts from artificial neural networks (ANNs) and nonparametric/nonstationary data analysis.
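Since clustering is framed above as an optimization problem, the following short Python sketch shows Lloyd's algorithm for standard K-means, alternating between assignment and centroid updates to decrease the K-means objective. The synthetic 2D data and parameters are hypothetical; none of the methods or libraries presented in this session are used.

```python
import numpy as np

def kmeans(X, k, n_iters=50, seed=0):
    """Lloyd's algorithm: alternating minimization of sum_i ||x_i - c_{z_i}||^2."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest centroid.
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: each centroid becomes the mean of its cluster
        # (keeping the old centroid if a cluster happens to be empty).
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    objective = d2[np.arange(len(X)), labels].sum()
    return labels, centroids, objective

# Synthetic 2D data with three well-separated clusters (made-up example).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in ([0, 0], [3, 0], [0, 3])])
labels, centroids, obj = kmeans(X, k=3)
print("centroids:\n", np.round(centroids, 2))
print("K-means objective at the final assignment:", round(obj, 2))
```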

A central challenge in solving such data analysis problems lies in not imposing too many – potentially inappropriate – a priori assumptions on the available data. In this respect, too, recent advances in high-performance optimisation methods can assist the development of data analysis methods, through appropriate regularisation concepts and related tools. Recent progress in GPU-based high-performance computing implementations of optimisation methods now enables us to apply these techniques to ever larger problems, for instance in causality inference, image denoising, data compression, and the identification of market phases in finance, among many others.

In this minisymposium we will discuss the current state of the art in optimisation-driven multidimensional data analysis problems, understand their parallel programming issues – particularly in view of the emergence of disruptive processor technology, such as clusters of Graphics Processing Units (GPUs) and other accelerators (e.g. Intel Xeon Phi) – and hear about their application to various data analysis problems from different areas. Recent work on the development of community libraries for HPC optimisation software will also be presented.

09:00 - 09:30
High Performance Implementation of FEM-H1 Approach to Clustering and Denoising of Multidimensional Time Series
Presenter:
Lukas Pospisil (Università della Svizzera italiana, Switzerland)
+ Abstract
09:30 - 10:00
The Quiet Atmosphere: Data-Driven Ideas for Bridging the Gap Between Observed and Modeled Nocturnal Turbulence
Presenter:
Nikki Vercauteren (Freie Universität Berlin, Germany)
+ Abstract
10:00 - 10:30
On Memory, Dimension, and Atmospheric Teleconnections
Presenter:
Didier Paolo Monselesan (Commonwealth Scientific and Industrial Research Organisation, Australia)
+ Abstract
10:30 - 11:00
Deep Learning Classification of Radio Astronomy Images
Presenter:
Claudio Gheller (ETH Zurich / CSCS, Switzerland)
+ Abstract
Organizer(s):
Omar Awile (CERN, Switzerland)
Track(s):
Physics

Understanding the basic building blocks of matter is amongst the most formidable ventures humanity has undertaken. The Large Hadron Collider (LHC) and its associated experiments have allowed us to observe the particles and processes that lie at the foundation of our current understanding of the physical world. Furthermore, the LHC and other high-energy physics (HEP) facilities continue to produce data at an ever increasing rate, allowing us to peer beyond the Standard Model.

In order to filter, process, and analyze the data captured at particle detectors, the HEP community has had an insatiable appetite for computing power over the last decades. Today the LHC experiments record around 150 PB/year. The rate of interactions to be studied will increase by a factor of 100 in the next 10 to 15 years.

Computing at the LHC experiments happens mostly in two domains. In Online Computing, data captured at the detector must be processed and filtered using near-realtime, high-throughput computing software frameworks. The reconstruction of a particle collision event employs a large number of complex algorithms before the results are stored for further analysis. Offline computing deals with the physics analysis of the large data sets captured by the detector and retained by the online computing software.

In this session we take a closer look at detector simulation and data analysis in HEP experiments. Much work has been devoted in recent years to further develop the existing software frameworks to better take advantage of modern hardware architectures. This includes shared-memory parallelism, vectorization and support for coprocessors and accelerators. Furthermore, the recent advancements in the field of machine learning are finding a number of applications in high-energy physics. The presentations discuss these simulation and data-analysis frameworks in the context of high-performance computing and modern hardware architectures.

09:00 - 09:30
Computing at CERN: Challenges and Opportunities
Presenter:
Omar Awile (CERN, Switzerland)
+ Abstract + Presentation
09:30 - 10:00
HEP Realtime Analysis: Scaling Beyond Embarrassing Parallel
Presenter:
Gerhard Raven (VU University Amsterdam, Netherlands)
+ Abstract + Presentation
10:00 - 10:30
High Performance Computing Meets High Energy Physics
Presenter:
Daniel Hugo Campora Perez (University of Seville, Spain)
+ Abstract + Presentation
10:30 - 11:00
Heterogeneous Event Selection at the CMS Experiment
Presenter:
Felice Pantaleo (CERN, Switzerland)
+ Abstract + Presentation
Organizer(s):
Daniele Di Marino (Università della Svizzera italiana, Switzerland),
Vittorio Limongelli (Università della Svizzera italiana, Switzerland)
Track(s):
Life Sciences, Chemistry & Materials

The elucidation of biological processes passes through different experimental protocols that are used to dissect intricate reactions into several small pieces, thus obtaining simplified models of the entire process. Computer simulations have been used since the early 1970s to study the physical and chemical properties of biomolecules, and in recent years they have played an increasingly relevant role in elucidating, complementing, and even predicting experimental observables. The chemical reactivity of biomolecules can be studied in silico at different levels of accuracy and dimension using electron-based, atom-based, and multiscale models. The choice among the different approaches depends on the property of the system the scientists aim to investigate. Electron transfer reactions, ligand-protein binding, and protein-protein interactions are only a few examples of the biological processes that can be investigated through computer simulations.

The main limitations of the computational approaches are represented by:
- Timescale of the biological processes;
- Size of the system to simulate;
- Accuracy in the system's description.

The continuous effort of the scientific community to overcome such limitations has led to important advances, represented by the development of novel methods and the improved performance of simulation codes on modern computer architectures such as high-performance computing (HPC) clusters. The scientific literature shows many successful examples in which simulations unravel complex biochemical problems, and some of these are illustrated in the present minisymposium.

The minisymposium also represents an opportunity, particularly for young researchers, to discuss state-of-the-art simulation techniques with some of the world-leading scientists in the field and to draw a line towards the future of biomolecular simulations.

09:00 - 09:30
Computational Challenges in Photobiology and Ultrafast Spectroscopy
Presenter:
Ivan Rivalta (École normale supérieure de Lyon, France)
+ Abstract
09:30 - 10:00
High-Resolution, Integrative Modeling of Biomolecular Complexes From Fuzzy Data
Presenter:
Alexandre Bonvin (Utrecht University, Netherlands)
+ Abstract
10:00 - 10:30
A Comprehensive Description of the Homo and Heterodimerization Mechanism of the Chemokine Receptors CCR5 and CXCR4
Presenter:
Daniele Di Marino (Università della Svizzera italiana, Switzerland)
+ Abstract
10:30 - 11:00
Enhanced Sampling Approaches in Molecular Dynamics Simulations
Presenter:
Michele Parrinello (Università della Svizzera italiana, Switzerland)
+ Abstract
Organizer(s):
Matthias Bollhöfer (TU Braunschweig, Germany)
Track(s):
Computer Science & Applied Mathematics, Engineering

Computational nanoelectronics is an emerging scientific area that leads to several challenging mathematical problems, such as the non-equilibrium Green's function formalism, density functional theory, covariance matrix analysis in uncertainty quantification, and dynamic mean-field theory, to mention just some of the topics. These problems require high-performance computing methods precisely because of their mathematical and algorithmic complexity. Relevant methods include solving large-scale eigenvalue problems, selectively inverting parts of a large-scale matrix, solving sequences of linear systems, and many more. This minisymposium addresses research and application software development for extreme-scale numerical linear algebra, targeted at classes of problems that require modern numerical methods and high computational performance.

09:00 - 09:30
Numerical Linear Algebra Techniques for Accelerating Electronic Structure Calculations
Presenter:
Chao Yang (Lawrence Berkeley National Laboratory, United States of America)
+ Abstract
09:30 - 10:00
Scalable Parallel Sparse Matrix Computations
Presenter:
Ahmed Sameh (Purdue University, United States of America)
+ Abstract
10:00 - 10:30
Densities of States: Algorithms and Applications in Linear Algebra
Presenter:
Yousef Saad (University of Minnesota, United States of America)
+ Abstract
10:30 - 11:00
Scalable Algorithms for Real-Space and Real-Time First-Principle Calculations
Presenter:
Eric Polizzi (University of Massachusetts Amherst, United States of America)
+ Abstract
Organizer(s):
Alexander Alexeev (Georgia Institute of Technology, United States of America)
Track(s):
Physics, Engineering

Active Matter is an emerging field of physics that studies a variety of systems composed of entities that actively consume energy and convert it into motion. Biological examples of active matter range in scale over many orders of magnitude, from swarms of motile bacteria to schools of fish, and together they can exhibit phenomena such as collective motion and dynamic self-organization. Understanding the basic principles governing inherently non-equilibrium active matter is a difficult challenge. One approach to this goal is to create synthetic systems that reproduce behaviors typical of living matter and that can be studied to understand the complex phenomena emerging in active matter. Computational modeling plays a critical role in this process, enabling both a better understanding of biological systems and the design of synthetic systems that can then be tested experimentally. This minisymposium will discuss recent computational advances and challenges in the field of Active Matter.

09:00 - 09:30
Collective Motion of Gel-Actuated Micro-Swimmers
Presenter:
Alexander Alexeev (Georgia Institute of Technology, United States of America)
+ Abstract
09:30 - 10:00
Equilibrium Physics Breakdown Reveals the Active Nature of Red Blood Cell Flickering
Presenter:
Dmitry Fedosov (Forschungszentrum Jülich, Germany)
+ Abstract
10:00 - 10:30
Prediction of Salt-Responsive Behavior of Polyelectrolyte Micelles and Gels
Presenter:
Yaroslava G. Yingling (North Carolina State University, United States of America)
+ Abstract
10:30 - 11:00
Plastic Attraction and Other Strange Phenomena in Active Matter
Presenter:
Alfredo Alexander-Katz (Massachusetts Institute of Technology, United States of America)
+ Abstract
Organizer(s):
Clement Surville (University of Zurich, Switzerland)
Track(s):
Physics

Astrophysical flows are a great challenge for today's simulation community. Astrophysical disks in particular exhibit large Mach numbers, turbulence, ionization, and shock waves, and are subject to many nonlinear instabilities. Compared with geophysical and laboratory flows, many improvements have been made to the numerical methods used in this field during the last decades. Finite-volume methods, mesh refinement, moving meshes, and other recent developments such as Lagrangian methods that improve on conventional particle-based methods (SPH) are used efficiently; today's large supercomputers also allow one to run long-term 3D calculations that were infeasible ten years ago. The results of these simulations give a new picture of the dynamics of these disks and contribute significantly to astrophysics. Planet formation models are improved, disk observations can be predicted and understood, new instabilities are discovered, and much more will be done in the coming years.

We present in this minisymposium some recent developments in the field, given by code developers and expert users. We will focus in particular on (i) different state-of-the-art solvers for the fluid equations of a single fluid component and (ii) methods that solve for two or more coupled fluids to model dust species and gas simultaneously. The effort and interest in the field now make it possible to perform astrophysical disk simulations on almost 100,000 CPU cores as well as on GPU clusters; this is becoming a new area of HPC in astrophysics.
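To give a flavour of the conservative finite-volume updates that underlie the disk codes discussed here, the snippet below solves 1D linear advection with a first-order upwind scheme on a periodic grid. It is a deliberately minimal, hypothetical setup (grid size, CFL number, and initial pulse are arbitrary) and is not taken from RoSSBi, PPM, or any other code in this session.

```python
import numpy as np

# First-order finite-volume upwind scheme for du/dt + a du/dx = 0 (periodic domain).
n, a, cfl = 400, 1.0, 0.8
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
u = np.exp(-200.0 * (x - 0.3) ** 2)           # initial Gaussian pulse

for _ in range(int(0.4 / dt)):
    flux = a * u                               # upwind flux at the right face of each cell (a > 0)
    u -= dt / dx * (flux - np.roll(flux, 1))   # conservative finite-volume update

print("pulse maximum after advection:", round(u.max(), 3),
      "near x =", round(x[u.argmax()], 3))
```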

09:00 - 09:30
The Code RoSSBi: A Multi-Fluid Method for Dust Dynamics in Protoplanetary Disks
Presenter:
Clement Surville (University of Zurich, Switzerland)
+ Abstract
09:30 - 10:00
A Restrictive Refinement Strategy for PPM Applied to Disk Simulations
Presenter:
Paul R. Woodward (University of Minnesota, United States of America)
+ Abstract
10:00 - 10:30
Hydro-Dynamic Stability of Radially and Vertically Stratified Disks
Presenter:
Hubert Klahr (Max Planck Institute for Astronomy, Germany)
+ Abstract
10:30 - 11:00
High Order Numerical Schemes for Computational Fluid Dynamics in Astrophysics
Presenter:
Maria Veiga (University of Zurich, Switzerland)
+ Abstract
Organizer(s):
Dominik Goeddeke (University of Stuttgart, Germany),
Michael A. Heroux (Sandia National Laboratories, United States of America)
Track(s):
Computer Science & Applied Mathematics

Current and future computing systems are becoming increasingly unreliable and unpredictable: it is expected that at scale, errors will become the rule rather than the exception, and already today severe performance fluctuations can be observed. In addition, the heterogeneity of the hardware requires robustness, asynchrony, and communication avoidance to be built directly into the methods in order to achieve close to peak performance. Consequently, reliability and robustness must be built directly into scientific computing applications and numerical algorithms. In this minisymposium, we discuss the state of the art in fault-tolerant, communication-avoiding, and asynchronous methods, focusing on, but not necessarily limited to, iterative solvers. The invited talks emphasize proven or provable novel algorithms and mathematical techniques beyond redundancy. Recent developments towards interactions with middleware and operating systems, as well as analytical or parameterised models, will also be highlighted. This minisymposium, a synthesis of the state of the field, will be accessible to non-experts and experts alike.
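As one concrete example of an algorithm-level technique "beyond redundancy", the sketch below runs a Jacobi iteration on a diagonally dominant system, injects a single silent corruption into an iterate, and detects it by monitoring the residual norm, rolling back to the previous iterate when the residual jumps. The matrix, injection point, and threshold are invented for illustration; this is not the method of any particular talk in this session.

```python
import numpy as np

# Diagonally dominant test system, so the Jacobi iteration converges.
n = 200
rng = np.random.default_rng(3)
A = rng.random((n, n))
A = A + A.T + 10.0 * n * np.eye(n)
b = rng.random(n)
D = np.diag(A)

x = np.zeros(n)
prev_res = np.linalg.norm(b)
for it in range(100):
    x_new = (b - (A @ x - D * x)) / D          # one Jacobi sweep
    if it == 50:                               # inject a silent corruption
        x_new[17] += 1.0e3
    res = np.linalg.norm(b - A @ x_new)
    # Detection: for this convergent stationary iteration the residual should
    # shrink; a large jump signals a corrupted iterate, which we discard.
    if res > 10.0 * prev_res and res > 1e-10:
        print(f"iteration {it}: residual jumped to {res:.2e}, rolling back")
        continue                                # keep the previous x and redo the sweep
    x, prev_res = x_new, res

print("final residual norm:", np.linalg.norm(b - A @ x))
```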

09:00 - 09:30
Resilient Constructs for Parallel Applications
Presenter:
George Bosilca (University of Tennessee, United States of America)
+ Abstract
09:30 - 10:00
Fault Detection and Mitigation for Multigrid Solvers
Presenter:
Dominik Göddeke (University of Stuttgart, Germany)
+ Abstract
10:00 - 10:30
Urgency and Progress in Application Level Fault Tolerance
Presenter:
Michael A. Heroux (Sandia National Laboratories, United States of America)
+ Abstract
10:30 - 11:00
Resilience Enhancement Using Discrete A Priori Bounds for the Detection of Faulty PDE Solutions
Presenter:
Paul Mycek (CERFACS, France)
+ Abstract + Presentation
11:00 - 11:30
Coffee Break, Foyer
11:30 - 12:30
CSCS Update, Room A
12:30 - 13:30
Lunch, Foyer
13:30 - 15:30
Minisymposia and Papers Sessions

First FoMICS Student Prize in Computational Science and Engineering

Chair:
Rolf Krause (Università della Svizzera italiana, Switzerland)

The Swiss Graduate Program FoMICS "Foundations in Mathematics and Informatics for Computer Simulations in Science and Engineering", led by the Institute of Computational Science (ICS) at the Università della Svizzera italiana in Lugano, is pleased to announce the first FoMICS prize for PhD students. In this session, selected students will have the opportunity to present their doctoral research in a short talk (10-15 min). The prize will be awarded based on the quality of the results and the ability to communicate them to the audience. The award will be presented at the closing session of PASC17.

Organizer(s):
Amanda Randles (Duke University, United States of America)
Track(s):
Emerging Domains in HPC, Life Sciences

The predictive power of fluid-structure interaction methods, together with the growing computational power of massively parallel supercomputing architectures, has led to key scientific and technological advances in patient-specific hemodynamic modeling. This increase in computational power, alongside improved numerical techniques, opens up the possibility of simulating and predicting behavior from the cellular to the systemic level over longer temporal domains. The aim of this minisymposium is to gather experts in the computational hemodynamics community to discuss the challenges in algorithmic development as well as in porting, scaling, and optimizing large-scale blood flow models for leadership-class systems. The presentations will focus both on findings and on the lessons learned regarding the effective use of next-generation architectures for the advancement of such biomedical applications.

13:30 - 14:00
Measuring the Ankle-Brachial Index with Massively Parallel Simulations
Presenter:
Amanda Randles (Duke University, United States of America)
+ Abstract
14:00 - 14:30
Performance Analysis of Parallel Lattice Boltzmann Methods on Complex Domains
Presenter:
Christian Godenschwager (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)
+ Abstract
14:30 - 15:00
Numerical Lab-On-a-Chip: High-Throughput Simulation and Optimization of Microfluidic Devices
Presenter:
Dmitry Alexeev (ETH Zurich, Switzerland)
+ Abstract
15:00 - 15:30
Fully Coupled Fluid-Electro-Mechanical Cardiac Problems
Presenter:
Jazmin Aguado (Barcelona Supercomputing Center, Spain)
+ Abstract
Organizer(s):
Joachim Biercamp (German Climate Computing Centre, Germany)
Track(s):
Climate & Weather

Numerical weather prediction and climate modeling are highly dependent on the available computing power in terms of the achievable spatial resolution, the number of members run in ensemble simulations, and the completeness of the physical processes that can be represented. Both domains also depend heavily on the ability to produce, store and analyze large amounts of simulated data, often under time constraints from operational schedules or internationally coordinated experiments. The ever-increasing complexity of both numerical models and high-performance computing (HPC) systems means that today one major limiting factor is no longer the theoretical peak performance of available HPC systems, but the relatively low sustained efficiency that can be obtained with complex numerical models of the Earth system.

The differences in model complexity, and in the temporal and spatial scales that were historically characteristic of climate and weather modeling, are vanishing, since both applications ultimately require complex Earth system modeling capabilities that resolve the same physical process detail across atmosphere, ocean, cryosphere and biosphere. With increasing compute power and data-handling needs, both communities must exploit synergies to tackle common scientific and technical challenges.

This minisymposium will focus on joint climate and weather community engagement in cutting edge high-resolution modeling for research and service provision.

13:30 - 14:00
ESiWACE: The Center of Excellence in Simulation of Climate and Weather in Europe
Presenter:
Joachim Biercamp (German Climate Computing Centre, Germany)
+ Abstract
14:00 - 14:30
High Resolution Simulations with the Weather Forecast Model ICON: Teaming Up Modeling and Measuring Communities in Climate and Weather Research
Presenter:
Daniel Klocke (DWD, Germany)
+ Abstract
14:30 - 15:00
IS-ENES Coupling Technology Benchmarks
Presenter:
Sophie Valcke (CERFACS, France)
+ Abstract
15:00 - 15:30
How to Escape from the Data Avalanche of High-Resolution Climate Models?
Presenter:
Christoph Schär (ETH Zurich, Switzerland)
+ Abstract
Organizer(s):
Omar Awile (CERN, Switzerland)
Track(s):
Physics

Understanding the basic building blocks of matter is amongst the most formidable ventures humanity has undertaken. The Large Hadron Collider (LHC) and its associated experiments have allowed us to observe the particles and processes that lie at the foundation of our current understanding of the physical world. Furthermore, the LHC and other high-energy physics (HEP) facilities continue to produce data at an ever increasing rate, allowing us to peer beyond the Standard Model.

In order to filter, process, and analyze the data captured at particle detectors, the HEP community has had an insatiable appetite for computing power over the last decades. Today the LHC experiments record around 150 PB/year, and the rate of interactions to be studied will increase by a factor of 100 in the next 10 to 15 years.

Computing at the LHC experiments happens mostly in two domains. In online computing, data captured at the detector must be processed and filtered using near-real-time, high-throughput software frameworks; the reconstruction of a particle collision event employs a large number of complex algorithms before the results are stored for further analysis. Offline computing deals with the physics analysis of the large data sets captured by the detector and retained by the online software.

In this session we take a closer look at detector simulation and data analysis in HEP experiments. Much work has been devoted in recent years to further developing the existing software frameworks to better exploit modern hardware architectures, including shared-memory parallelism, vectorization, and support for coprocessors and accelerators. Furthermore, recent advances in machine learning are finding a number of applications in high-energy physics. The presentations discuss these simulation and data-analysis frameworks in the context of high-performance computing and modern hardware architectures.

13:30 - 14:00
GeantV: Designing the Future of Particle Transport Simulation for HEP
Presenter:
Sofia Vallecorsa (CERN, Switzerland)
+ Abstract + Presentation
14:00 - 14:30
Stitched the Multi-Threaded CMS Framework: Strategy and Performance on HPC Platforms
Presenter:
Vincenzo Innocente (CERN, Switzerland)
+ Abstract + Presentation
Organizer(s):
Martin Frank (RWTH Aachen University, Germany)
Track(s):
Emerging Domains in HPC, Life Sciences

Fluorescence-mediated tomography (FMT) is an optical imaging technique for recovering the three-dimensional distribution of a fluorescent probe at depths of a few centimeters. Its main application is preclinical drug development, but the technology is also potentially applicable to human hand and breast imaging. In FMT, a light source illuminates the object; the light propagates into the object, where it is scattered and absorbed, and excites a fluorescent material that emits light at a different wavelength. The remaining excitation light and the emitted light are measured on the far side of the object, and from these measurements the fluorescence distribution should be reconstructed as accurately as possible. This is not only a relevant medical problem but also a current topic of mathematical research. One important aspect is an accurate optical model containing information about the shape of the object and its heterogeneous scattering and absorption maps. Mathematically, the process can be described by the Boltzmann transport equation, which is itself expensive to solve. Furthermore, due to the strong scattering at near-infrared wavelengths, the inverse problem of fluorescence reconstruction is mathematically and computationally challenging and requires HPC.
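For readers unfamiliar with the forward model, the stationary, monochromatic transport equation commonly used in optical tomography reads (in our notation, which need not match that of the speakers)

\[
\omega \cdot \nabla \phi(x,\omega) + \bigl(\mu_a(x) + \mu_s(x)\bigr)\,\phi(x,\omega)
  = \mu_s(x) \int_{S^{2}} k(\omega\cdot\omega')\,\phi(x,\omega')\,\mathrm{d}\omega' + q(x,\omega),
\]

with \(\phi\) the radiance, \(\mu_a\) and \(\mu_s\) the spatially heterogeneous absorption and scattering coefficients, \(k\) the scattering phase function, and \(q\) the source. In the fluorescence problem one such equation is typically solved at the excitation wavelength and a second one, sourced by the unknown fluorophore distribution, at the emission wavelength.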

The purpose of this minisymposium is to report on continuing progress in mathematical and computational methods that aim to improve the quality of FMT image reconstruction. It brings together researchers from applied mathematics, computational science, and industry to discuss their work and exchange ideas.

13:30 - 14:00
Fluorescence-Mediated Tomography in Pharmacology Research
Presenter:
Felix Gremse (RWTH Aachen University, Germany)
+ Abstract
14:00 - 14:30
Improving Resolution and Sensitivity of Fluorescence-Mediated Tomography using GPU-Accelerated Image Reconstruction
Presenter:
Felix Gremse (RWTH Aachen University, Germany)
+ Abstract
14:30 - 15:00
Accelerated Image Reconstruction Algorithms in Fluorescence Optical Tomography
Presenter:
Herbert Egger (TU Darmstadt, Germany)
+ Abstract
15:00 - 15:30
On Inexact Geometry Treatment in Fluorescence Molecular Tomography
Presenter:
Matthias Schlottbom (University of Twente, Netherlands)
+ Abstract
Organizer(s):
Dominik Goeddeke (University of Stuttgart, Germany),
Michael A. Heroux (Sandia National Laboratories, United States of America)
Track(s):
Computer Science & Applied Mathematics

Current and future computing systems are becoming increasingly unreliable and unpredictable: it is expected that at scale, errors will become the rule rather than the exception, and severe performance fluctuations can already be observed today. In addition, the heterogeneity of the hardware mandates that robustness, asynchrony, and communication avoidance be built directly into the methods in order to achieve close-to-peak performance. Consequently, reliability and robustness must be built into scientific computing applications and numerical algorithms themselves. In this minisymposium, we discuss the state of the art in fault-tolerant, communication-avoiding and asynchronous methods, focusing on, but not necessarily limiting the scope to, iterative solvers. The invited talks emphasize proven or provable novel algorithms and mathematical techniques beyond redundancy. Recent developments towards interactions with middleware and operating systems, as well as analytical or parameterised models, will also be highlighted. This minisymposium, a synthesis of the state of the field, will be accessible to non-experts and experts alike.
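A classical instance of algorithm-based fault tolerance of the kind discussed in this part of the minisymposium is the Huang-Abraham checksum scheme for matrix multiplication. The sketch below is a minimal dense-NumPy illustration with function names and tolerances of our own choosing; it is not the high-performance scheme presented in the talk on fault-tolerant matrix multiplication.

```python
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    """Matrix multiplication with Huang-Abraham style checksums.

    A is augmented with a row of column sums, B with a column of row
    sums; after the product, the checksum row/column of the result must
    match the column/row sums of the data block, otherwise a fault is
    flagged.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    Af = np.vstack([A, A.sum(axis=0, keepdims=True)])   # (m+1) x k
    Bf = np.hstack([B, B.sum(axis=1, keepdims=True)])   # k x (n+1)
    Cf = Af @ Bf                                         # (m+1) x (n+1)
    C = Cf[:m, :n]
    scale = tol * max(1.0, np.abs(C).max())
    row_ok = np.allclose(Cf[:m, n], C.sum(axis=1), atol=scale)
    col_ok = np.allclose(Cf[m, :n], C.sum(axis=0), atol=scale)
    return C, (row_ok and col_ok)

# Example usage: a healthy multiplication passes the check.
rng = np.random.default_rng(0)
C, ok = abft_matmul(rng.standard_normal((50, 40)),
                    rng.standard_normal((40, 30)))
```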

13:30 - 14:00
Fault-Tolerant Parallel-In-Time Integration with PFASST
Presenter:
Robert Speck (Forschungszentrum Jülich, Germany)
+ Abstract
14:00 - 14:30
Reducing Data Movement by Exploiting Computation Dependencies
Presenter:
Marc Casas (Barcelona Supercomputing Center, Spain)
+ Abstract
14:30 - 15:00
Fault Tolerance with High Performance for Matrix Multiplication
Presenter:
Noam Birnbaum (The Hebrew University of Jerusalem, Israel)
+ Abstract
15:00 - 15:30
Enlarged Krylov Subspace Methods for Reducing Communication
Presenter:
Olivier Tissot (INRIA, France)
+ Abstract
Organizer(s):
Yoshitaka Tateyama (National Institute for Materials Science, Japan)
Track(s):
Chemistry & Materials

Although it seems difficult for any hardware to reach a true exaflop at the moment, developing first-principles calculation codes in this direction remains very important to the materials science, chemistry, and condensed matter physics communities. The Swiss National Supercomputing Centre, which operates the world No. 8 system Piz Daint (9.7 PFlops), announced that it will adopt NVIDIA Pascal GPUs in conjunction with Intel Haswell CPUs. In Japan, on the other hand, the Joint Center for Advanced High Performance Computing and FUJITSU launched the world No. 6 Oakforest-PACS (13.5 PFlops), built from Intel Xeon Phi processors, and RIKEN, which hosts the world No. 7 K computer (10.5 PFlops) built by FUJITSU, announced that it will adopt ARM processors for the Post-K exascale supercomputer. There are thus two distinct directions: GPU-CPU systems and many-core systems. Given this trend, first-principles (mainly DFT) calculation codes need to be adapted to either or both, at the level of the implementation or of the methodology. This minisymposium is designed to bring together developers facing these issues in Switzerland and Japan, and to discuss future directions with an audience from domain science and computer science.

13:30 - 14:00
Large Scale Electronic Structure Calculations with CP2K
Presenter:
Juerg Hutter (University of Zurich, Switzerland)
+ Abstract
14:00 - 14:30
NTChem: A High-performance Software Package for Quantum Chemistry Simulation
Presenter:
Takahito Nakajima (RIKEN, Japan)
+ Abstract
14:30 - 15:00
Large-Scale Static and Dynamic Density-Functional Calculations in a Real-Space Scheme: New Physical Properties of Two-Dimensional Systems
Presenter:
Atsushi Oshiyama (University of Tokyo, Japan)
+ Abstract
15:00 - 15:30
BigDFT: Accurate and Efficient Density Functional Calculations on Peta Scale Computers
Presenter:
Stefan Goedecker (University of Basel, Switzerland)
+ Abstract
Organizer(s):
Paul R. Woodward (University of Minnesota, United States of America)
Track(s):
Physics, Computer Science & Applied Mathematics

In the early days of HPC, a debate raged between the Lagrangian and Eulerian approaches to hydrodynamic simulations. In 1D there was no contest: Lagrangian hydrodynamics was the clear winner. This approach is still seen today in stellar evolution codes, where the immense time spans that must be covered force us to traverse at least the great bulk of that time span in 1D. In the 1960s and 70s, the debate moved to 2D. Lagrangian grids tangle, which gave rise to various mitigating techniques, such as slide lines and “free Lagrange” or “continuous rezoning” approaches. Eulerian grids cannot easily capture critical flow features, such as multimaterial surfaces, which gave rise to mitigating techniques such as volume-of-fluid and front-tracking approaches. In the 1980s, the seeds of modern SPH (smoothed particle hydrodynamics) and AMR (adaptive mesh refinement) were planted in the Lagrangian and Eulerian camps, respectively. The simulation community then moved to 3D, where these new variants now play very important roles.

In this minisymposium, we will explore the present state of the art on both the Lagrangian and Eulerian sides. For example, can Riemann problems play a similar role in improving shock capturing in Lagrangian codes that they play in Eulerian ones? Or, viewing the debate from another angle, how dynamic can an AMR grid actually be at an affordable overhead cost? A potential outcome of the discussion could be a small selection of test problems that could, in principle, be addressed by both approaches. Problems along the lines of “I bet you can’t do this” would be fun, but only if they could actually be set up in the opposite approach without huge investments of time and effort. Problems that have been addressed in both ways in recent years, and whose present state of the art would be of interest to hear about, include the common-envelope star problem, the formation of galaxies and other large structures in cosmology, star formation and fragmentation, planet formation, and a variety of problems involving unstable multifluid interfaces and the development of turbulence. A minimal example of how such a shared test problem might be specified follows this abstract.

Contributions showing the state of the art in these domains will be presented from the Lagrangian and Eulerian perspectives. Contributors will demonstrate the potential for their favorite problems to become selected tests that could be used by the community in advancing this debate, and with it the level of technical proficiency on each side. This minisymposium is viewed as a mechanism to begin this debate afresh from a modern perspective; the selected presenters will engage the attendees, particularly with the goal of establishing a forum and a set of agreed-upon test problems.
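As a hint of what such a shared benchmark definition might look like, the snippet below specifies the classic Sod shock tube, the simplest problem that both Lagrangian and Eulerian codes routinely run. It is offered purely as an illustration of the format; the dictionary layout and key names are ours, and this is not one of the test problems the organizer intends to propose.

```python
# Minimal, code-agnostic specification of a shared 1D test problem
# (the classic Sod shock tube), given as plain data so that both
# Lagrangian and Eulerian codes can ingest it.
SOD_SHOCK_TUBE = {
    "gamma": 1.4,                    # ratio of specific heats
    "domain": (0.0, 1.0),            # 1D spatial domain
    "interface": 0.5,                # initial discontinuity position
    "left":  {"rho": 1.0,   "p": 1.0, "u": 0.0},
    "right": {"rho": 0.125, "p": 0.1, "u": 0.0},
    "t_end": 0.2,                    # time at which profiles are compared
    "outputs": ["rho", "u", "p"],    # quantities compared to the exact solution
}
```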

13:30 - 14:00
Using Constrained, High-Order Subgrid Structure to Address Interaction of Turbulence with Unstable Multi-Fluid Interfaces in Uniform-Grid Eulerian Simulations
Presenter:
Paul R. Woodward (University of Minnesota, United States of America)
+ Abstract
14:00 - 14:30
Smooth Particle Hydrodynamics: Methods and Astrophysical Applications
Presenter:
Thomas Quinn (University of Washington, United States of America)
+ Abstract
14:30 - 15:00
Arbitrary Lagrangian-Eulerian Hydrodynamics: Recent Development and Test Problems
Presenter:
Gabriel Rockefeller (Los Alamos National Laboratory, United States of America)
+ Abstract + Presentation
15:30 - 16:00
Coffee Break, Foyer
Chair:
Thomas Schulthess (ETH Zurich / CSCS, Switzerland)
16:00 - 16:50
IP05 Towards Quantum High Performance Computing
Presenter:
Matthias Troyer (Microsoft Research, United States of America)
+ Abstract + Biography
16:50 - 17:00
Award Ceremony, Room A
17:00 - 17:10
Closing Session, Room A