Linear elastodynamics is a physically and mathematically well-understood problem, and numerical techniques for it have been successfully developed and applied for decades. However, the sheer scale of the problem, reaching 10^12 degrees of freedom, as well as the multi-scale complexity of the underlying parameter space, render seismic applications some of the most challenging HPC problems in the physical sciences. This is particularly true for the inverse problem of mapping millions of observations to model parameters. Technical bottlenecks arise on many fronts: meshing complex 3D geological structures, scalability, adaptation to emerging architectures, data infrastructure for millions of simulations, provenance, and code usability.

In this minisymposium, we will hear about a diverse range of topics covering state-of-the-art wave propagation at scales ranging from the globe to the human body, each with specially adapted techniques and their HPC solutions. Many of the talks will be based on variants of the spectral-element technique, which has dominated large-scale seismology for the past two decades. Novel variants include its scalable adaptation to tetrahedra, a new flexible implementation in C++, coupling to pseudo-spectral approaches, and scaling on emerging architectures. Other techniques covered will be the discontinuous Galerkin method for dynamic earthquake rupture and an immersive approach coupling numerical modeling with wave tank experiments on FPGAs.

Many of the talks will be driven by requirements from specific applications such as nonlinear earthquake rupture dynamics in a complex 3D geological fault system, multiscale geological structures at scales reaching the deep Earth interior, wave tank experiments, seismic tomography at large and industrial scales, and an application to breast cancer detection using ultrasound.

In the discussion, we will strive to identify common bottlenecks, ideas for adapting to emerging architectures, and a possible basic set of shared algorithmic solutions, and discuss how to consolidate different approaches around commonalities such as meshing, MPI approaches, data infrastructures, or numerical solvers.

# Schedule

**Registration**, Foyer

**Welcome to the Conference**, Room A

**Words from the Conference Chairs**, Room A

**Lunch**, Foyer

**Minisymposia and Papers Sessions**

In this series of two minisymposia, a special focus lies on providing a platform for exchanging ideas about scalable, memory-efficient, fast, and resilient solving techniques. These characteristics are crucial for science- and engineering-driven applications that make use of exascale computing, for example in geophysics, astrophysics, and aerodynamics. High-performance computing algorithms require a rethinking of standard approaches to ensure, on the one hand, full usage of future computing power and, on the other, energy efficiency. A careful implementation of all performance-relevant parts and an intelligent combination with external libraries are fundamental for exascale computation. Multigrid and domain decomposition methods play an important role in many scientific applications, yet the two communities have often developed their ideas separately while using the immense compute power of supercomputers. The minisymposia address both communities and focus on exchanging current research progress related to exascale-enabled solving techniques.

The language of linear algebra is ubiquitous across scientific and engineering disciplines and is used to describe phenomena and algorithms alike. The translation of linear algebra expressions into high-performance code is a surprisingly challenging problem, requiring knowledge of high-performance computing, compilers, and numerical linear algebra. Typically, the user is offered two contrasting alternatives: either high-level languages (e.g. Matlab), which enable fast prototyping at the expense of performance, or low-level languages (e.g. C and Fortran), which allow for highly efficient solutions at the expense of extremely long development cycles. This workshop brings together domain specialists who believe that productivity and high performance need not be mutually exclusive.
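As a toy illustration of this translation gap (a hypothetical example, not drawn from any particular talk): the expression y = ABx is mathematically unambiguous, yet its two natural translations into code differ by a factor of n in cost, and a naive mapping from notation to code will happily pick the slow one.

```python
import numpy as np

# The same linear algebra expression, y = A B x, mapped to code in two
# mathematically equivalent but computationally very different ways.
n = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)

y_naive = (A @ B) @ x   # O(n^3): forms the n-by-n product A B first
y_smart = A @ (B @ x)   # O(n^2): only matrix-vector products

assert np.allclose(y_naive, y_smart)
```

Recognizing and exploiting such algebraic restructurings automatically is exactly the kind of knowledge a high-performance translation of linear algebra must encode.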

At the beginning of the current century, we face massive challenges due to the increasing global demand for energy, centered on two major issues. On the one hand, conventional fossil fuel resources such as oil, natural gas, and coal are limited and dwindling. On the other hand, the emissions from the combustion of fossil fuels evidently alter the chemical composition of our atmosphere, with adverse effects on climate and environment. These global challenges urgently demand technological advances in energy conversion, storage, and transport. The search for novel materials for energy applications has recently become an extremely active area of research worldwide, with efforts in chemistry, solid-state physics, and materials science, via the Materials Genome Initiative in the US and related initiatives in other countries. In this search, computational tools are being actively developed not only to explore the uncharted chemical space of new materials, but also to understand the complex interplay of materials properties with the underlying crystal structures.

One particular class of materials for energy applications are thermoelectric materials, which are required to drive thermoelectric generators that allow for a reliable, clean, emission-free conversion of (waste) heat into electricity. Until the mid-1990s, thermoelectrics had been considered inefficient and not economically relevant, but with enhanced structural engineering and intense research on novel complex materials, interest in thermoelectrics has recently revived. The efficiency of a thermoelectric material is governed by the so-called figure of merit zT, which is maximized by increasing the thermopower and electrical conductivity while reducing the thermal conductivity. These materials properties are, however, strongly interrelated; for example, in most materials the thermal and electrical conductivities are coupled through the Wiedemann-Franz law. Hence, the search for a material with a maximal zT poses a non-trivial materials design challenge.
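For reference, the quantities involved can be summarized in the standard expression for the figure of merit, with thermopower (Seebeck coefficient) $S$, electrical conductivity $\sigma$, absolute temperature $T$, and the electronic and lattice contributions to the thermal conductivity:

```latex
zT = \frac{S^{2}\,\sigma\,T}{\kappa_{\mathrm{el}} + \kappa_{\mathrm{lat}}},
\qquad
\kappa_{\mathrm{el}} \approx L\,\sigma\,T
\quad \text{(Wiedemann-Franz law, with Lorenz number } L\text{)}
```

The second relation makes the design conflict explicit: increasing $\sigma$ to raise the numerator also raises $\kappa_{\mathrm{el}}$ in the denominator, leaving the lattice contribution $\kappa_{\mathrm{lat}}$ as the main independent lever.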

This symposium aims at bringing together scientists to share their computational efforts in thermoelectric materials development. An accurate description of bulk lattice thermal transport, which is governed by phonon-phonon interactions, demands advanced simulation techniques and large HPC infrastructures. Solving the Boltzmann phonon transport equation requires knowledge of the anharmonic energy contributions that give rise to phonon scattering, one of the most computationally demanding aspects of modeling thermal resistivity. Density functional perturbation theory and finite-difference methods are the current state-of-the-art approaches, but remain computationally highly demanding. Furthermore, the interactions of phonons with electrons become increasingly important at elevated temperatures and have recently been a focus of research in thermoelectric materials. Finally, methods for modeling transport properties at large scale are required for the discovery of new materials with improved thermoelectric properties. The focus of this symposium will be on novel approaches to modeling transport properties, based among others on machine learning, signal processing, and high-throughput techniques, to advance the in silico discovery of thermoelectric materials.

The simulation of turbulent flows in engineering applications is often characterized by high Reynolds numbers, physical processes that depend on length scales too small to be resolved, and complex geometry. The advances in computing hardware notwithstanding, it is becoming clear that large eddy simulation (LES) of such flows, where the resolved/filtered scales of motion are evolved and the unresolved scales are modeled, is still intractable. The large number of grid points required for a well-resolved simulation of the flow physics places a greater need on modeling the unresolved scales and evolving the resolved scales in a manner whereby dissipative and dispersive errors are minimized. Given that in an LES the errors in the solution are a combination of filtering errors, characterized by the filter width (delta), numerical errors, characterized by the order of accuracy and the cell width (h), and the sub-grid scale (SGS) modeling error, an assessment of the resulting "solution" is complicated by the difficulty of isolating the effects of each of these contributing factors. This area of research offers opportunities to quantify the trade-offs between the computational advantages of higher-order numerical methods and their turbulence-resolving capabilities in the context of LES of high-Reynolds-number flows. On emerging exascale computing platforms, where the available power is capped at 20 MW, the architecture is increasingly characterized by processors with a large number of cores running at dynamic clock speeds that decrease as the cores begin to overheat, deep memory hierarchies with less on-chip memory, and multiple pathways to parallelizing algorithms, ranging from coarse-grained parallelism (MPI) to fine-grained parallelism (threads, vectorization).
The reduced on-chip memory requires time-consuming operations to fetch data from external (off-chip) memory into local cache before computations are possible. On such platforms, the traditional measure of parallel code efficiency, FLOPs alone, is being replaced by the more meaningful arithmetic intensity (AI), defined as the ratio of FLOPs to the number of load-store operations. For turbulence simulations, it would appear that higher-order numerical methods, being less memory-bandwidth limited, may offer an obvious advantage. However, despite a veritable body of literature documenting the advantages of higher-order methods on ideal problems (cases where one has a fairly high degree of control over the inflow and boundary conditions and the geometry of the computational domain), it is not at all clear that these methods can serve as the gold standard for driving the compute engine of a predictive flow simulation tool, with considerable uncertainties in the flow conditions and complexities in the geometry. The focus of this minisymposium, therefore, is the presentation of higher-order numerical discretizations, their impact on the resolvable turbulent flow physics, and the scaling and parallel performance of higher-order discretizations on emerging computing hardware.
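As a back-of-the-envelope illustration of the AI measure defined above (the kernel choices and idealized operation counts are ours, not the minisymposium's, and ignore caching effects):

```python
# Arithmetic intensity AI = FLOPs / load-store operations, following the
# definition in the text. Counts below are idealized best cases.

def ai_daxpy(n):
    # y[i] = a * x[i] + y[i]: 2n FLOPs; 2n loads + n stores
    return (2 * n) / (3 * n)

def ai_matmul(n):
    # C = A @ B, n x n: 2n^3 FLOPs; ~3n^2 memory operations if every
    # matrix element were moved exactly once (perfect data reuse)
    return (2 * n ** 3) / (3 * n ** 2)

print(ai_daxpy(10 ** 6))  # ~0.67, independent of n: firmly memory-bound
print(ai_matmul(1024))    # ~682.7, grows linearly with n: compute-bound
```

The same reasoning underlies the appeal of higher-order discretizations: they perform more arithmetic per degree of freedom loaded from memory, pushing the kernel toward the compute-bound regime.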

Recent editions of the TOP500 list show a growing impact of computing systems with architectural features such as more complex memory hierarchies incorporating fast yet limited memory per core, the addition of large-capacity non-volatile memory, a substantial increase in cores per shared-memory "island", and ever closer integration of high-performance interconnects with the CPU and memory subsystem. Application codes that want to take advantage of such systems need to be reshaped to achieve good levels of performance. At the same time, it is important to ensure that codes can be maintained and developed further without undue complexity imposed by the execution systems. Moreover, significant advances in reconfigurable computing and its system integration have generated major interest: new generations of FPGAs provide highly efficient floating-point units, and fast cache-coherent interconnects to CPUs have been announced. On the software side, the momentum around OpenCL is lowering the entry barriers. Tighter integration of FPGAs and CPUs will bring traditional FPGA workloads closer to the more "general purpose" server and require fewer specialized custom boards.

These emerging computing architectures drive innovative developments in computer science and applied mathematics which, in return, enable new capabilities for scientific applications. The minisymposium will be an excellent opportunity to share some of the most recent, leading-edge advances for scientific applications enabled by these trends. New algorithmic developments, and how to implement them on emerging architectures, will be at the heart of the session: new PDE solvers for QCD, efficient implementations of Fast Multipole Methods applied to biomolecular simulations, using the cache-aware roofline model to guide performance optimization, and high-energy-physics workloads projecting efficient usage of nodes combining Xeon CPUs and FPGAs.

The applied life sciences are of huge importance, both economically (the pharmaceutical sector alone accounts for roughly 30% of Switzerland's exports) and in terms of tackling societal challenges such as aging populations, the increasing burden of chronic diseases, and spiraling health care costs. The challenges in the industry are many and varied, and include the need for a much better understanding of why certain promising drugs fail trials, how best to identify and model sub-groups in patient populations to deliver on the promises of precision medicine, and how to integrate information and models ranging from the molecular level up to patient-worn sensors.

Due to their importance, the applied life sciences should be supported with the best possible tools to tackle these challenges. Whilst the use of computing is reasonably well established in this sector, the use of High-Performance Computing (HPC) is much less well established when compared to other sectors such as engineering or physics. This is indeed unfortunate given the potential of in silico experiments and analysis of complex data to advance the state of the art in the field and deliver concrete benefits to society.

In this minisymposium we will explore various approaches to the use of computing for the applied life sciences, ranging from lower-level systems modeling, through the application of large-scale machine learning to high volume screens in drug discovery, to analysis of genomic information. Each of these has different modeling and scaling challenges and has had varying levels of success in the application of HPC to the problem. We will have speakers from industry, academia and industrial-academic collaborations alike, giving varying perspectives on the state of the art and the potential for the application of HPC.

The specific areas covered by the speakers will be the following:

- HPC implementation of multi-target compound activity prediction in chemogenomics based on state-of-the-art large-scale machine learning techniques

- Challenges in data handling and computation for the analysis of DNA for personalized healthcare

- Systems biology and HPC

**Coffee Break**, Foyer

**Minisymposia and Papers Sessions**

This minisymposium will focus on computational approaches to simulate tissue dynamics. Recent advances in algorithms, hardware, and microscopy enable more sophisticated and realistic simulations of tissue dynamics. A variety of simulation frameworks are being developed to capture different aspects of tissue dynamics. Each method has its advantages and disadvantages in terms of resolution, realism, and computational efficiency. This minisymposium will present a variety of state-of-the-art methods and their applications in biology.

The four talks will present interface-capturing methods such as the phase-field method, vertex models, as well as LBIBCell, a simulation framework that permits tissue simulations at cellular resolution by combining the Lattice-Boltzmann method for fluid and reaction dynamics with an immersed boundary condition to capture the elastic properties of tissues and to permit fluid-structure interactions.

The minisymposium will thereby offer an overview of state-of-the-art approaches to tissue simulation, and highlight recent advances and remaining challenges.

The complexity and nature of fluid flows imply that the resources needed to computationally model problems of industrial and academic relevance are virtually unbounded. CFD simulations are therefore a natural driver for exascale computing, with the potential for substantial societal impact: reduced energy consumption, alternative sources of energy, improved health care, and improved climate models. Extreme-scale CFD poses several cross-disciplinary challenges, e.g. algorithmic issues in scalable solver design, handling of extreme-sized data with compression and in-situ analysis, and resilience and energy awareness in both hardware and algorithm design. This wide range of topics makes exascale CFD relevant to a wider HPC audience, extending beyond the traditional fluid dynamics community.

This minisymposium is organized by the EU-funded Horizon 2020 project ExaFLOW together with leading CFD experts from industry, and will feature presentations showcasing work on key algorithmic challenges on the way to exascale CFD, e.g. accurate and scalable solvers and strategies to ensure fault tolerance and resilience. The session aims to bring together the CFD community as a whole, from HPC experts to domain scientists, to discuss current and future challenges of exascale fluid dynamics simulations and to facilitate international collaboration.

Weather and climate prediction centers face enormous challenges: the rising energy cost of running complex high-resolution forecast models on ever more processors, and the likelihood that Moore's law will soon reach its limit, with microprocessor feature density (and performance) no longer doubling every two years. But the biggest challenge to state-of-the-art computational services arises from their own software productivity shortfall. The application software at the heart of all prediction services throughout Europe is ill-equipped to adapt efficiently to the rapidly evolving heterogeneous hardware provided by the supercomputing industry. The solution is not to relax the stringent requirements of Earth-system prediction, but to combine scientific and computer-science expertise to define and co-design the necessary steps towards affordable, exascale high-performance simulations of weather and climate. The Energy-efficient Scalable Algorithms for Weather Prediction at Exascale (ESCAPE) project brings together a consortium of weather prediction centres operating at global as well as European regional scales, university institutes researching numerical methods and novel code optimization techniques, HPC centres with vast experience in scalable code development and diverse processor technologies, large HPC hardware vendors operating market-leading systems, and a European start-up SME with novel and emerging optical processor technologies, to address the challenge of extreme-scale, energy-efficient high-performance computing.
Key objectives of ESCAPE are to (i) define fundamental algorithm building blocks ("weather & climate dwarfs") to foster trans-disciplinary research and innovation and to co-design, advance, benchmark, and efficiently run the next generation of NWP and climate models on energy-efficient, heterogeneous HPC architectures; (ii) diagnose and classify weather and climate dwarfs on different HPC architectures; and (iii) combine frontier research on algorithm development and extreme-scale, high-performance computing applications with novel hardware technology to create a flexible and sustainable weather and climate prediction system. This minisymposium will present the current state of prediction model component developments of weather and climate dwarfs within and beyond ESCAPE, and the implications for performance and employed programming models. The session acts in close collaboration with the minisymposium 'Programming Models and Abstractions for Weather and Climate Models: Today and in the Future'.

Isogeometric Analysis (IgA) is a recent but well established method for the analysis of problems governed by differential equations. Its goal is to reduce the gap between the worlds of Finite Element Analysis (FEA) and Computer Aided Design (CAD). One of the key ideas in IgA is to use a common spline representation model for the design as well as for the analysis, providing a true design-through-analysis methodology.

The IgA approach has proved superior to conventional FEA in various engineering application areas, including structural mechanics, electromagnetism, and fluid-structure interaction. The keystones of this success are the many outstanding properties of the underlying spline spaces and the associated B-spline basis. Spline representations allow for efficient (geometric) manipulation, high approximation power per degree of freedom, appealing spectral properties, and fast numerical linear algebra methods exploiting these spectral properties and/or tensor techniques.
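One of the outstanding properties alluded to above, the partition of unity of the B-spline basis, can be checked directly with SciPy (an illustrative sketch; the knot vector and degree are our own choices):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                            # cubic B-splines
# open (clamped) knot vector on [0, 1]
t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, 6), np.ones(k)])
n_basis = len(t) - k - 1                         # number of basis functions

def basis(i, x):
    """Evaluate the i-th B-spline basis function at points x."""
    c = np.zeros(n_basis)
    c[i] = 1.0
    return BSpline(t, c, k)(x)

xs = np.linspace(0.0, 1.0, 101)
total = sum(basis(i, xs) for i in range(n_basis))
print(np.allclose(total, 1.0))   # partition of unity on [0, 1]
```

The same coefficient-vector construction generalizes to NURBS and is the building block behind the spectral and tensor-product structure exploited by the fast solvers discussed in the session.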

The minisymposium will address the most recent research directions and results related to

1) analysis of spectral properties in concrete applications

2) fast numerical linear algebra methods in connection with B-splines, NURBS, extended spaces, etc.

Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes that can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, and task-based parallelism, as well as new algorithms adapted to modern hardware. This minisymposium shall bring together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.

Within materials science and cheminformatics, machine learning and inductive reasoning are known for their use in so-called structure-property relationships. Despite a long tradition of these methods in pharmaceutical applications, their overall usefulness for chemistry and materials science has been limited. Only over the last couple of years have a number of machine learning (ML) studies appeared with the commonality that quantum mechanical or atomistically resolved properties are analyzed or predicted based on regression models defined in compositional and configurational space. The atomistic framework is crucial for the unbiased exploration of this space since it enables, at least in principle, the free variation of chemical composition, atomic weights, structure, and electron number. Substantial CPU investments have to be made to obtain sufficient training data using atomistic simulation protocols. This minisymposium features four of the most active players in the field, who share a common background in developing computationally demanding atomistic simulation methods and who have contributed new and original work based on unsupervised (Ceriotti and Varma) as well as supervised (Ghiringhelli and von Lilienfeld) learning.

In the minisymposium "Parallel Numerical Linear Algebra" we will address two major problems. The first part concentrates on dense eigenvalue solvers and is based on the work of the ELPA-AEO project. The underlying problems are Hermitian generalized eigenvalue problems and the parallel computation of a large part of the spectrum. The talks will present theoretical results as well as practical implementations. The second topic is the parallel solution of systems of linear equations. Here, the first talk will consider the parallelization of smoothers in multigrid methods. The second talk will present parallel preconditioners based on incomplete LU (ILU) factorization; the resulting sparse triangular systems are themselves preconditioned and solved iteratively to obtain efficient parallel methods.
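A serial sketch of the ILU idea mentioned in the last talk, using SciPy's `spilu` as a preconditioner for GMRES (the talk concerns parallel variants with iteratively solved triangular systems; the 1D Poisson matrix here is merely a stand-in test problem):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# 1D Poisson matrix as a simple sparse test system
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                            # incomplete LU factors of A
M = spla.LinearOperator((n, n), ilu.solve)     # preconditioner M ~= A^-1

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))         # info == 0 means converged
```

Each GMRES iteration applies `ilu.solve`, i.e. two sparse triangular solves; parallelizing exactly that step is what the approach presented in the talk targets.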

**Coffee Break**, Foyer

#### PNL01 Beyond Moore's Law

By most accounts, we are nearing the limits of conventional photolithography processes. It will be challenging to shrink feature sizes below 5 nm and still realize any performance improvement for digital electronics in silicon. At the current rate of development, the purported "End of Moore's Law" will be reached by the middle to end of the next decade.

Shrinking the feature sizes of wires and transistors has been the driver for Moore’s Law for the past 5 decades, but what might lie beyond the end of current lithographic roadmaps and how will it affect computing as we know it? Moore’s Law is an economic theory after all, and any option that can make future computing more capable each new generation (by some measure) could continue Moore’s economic theory well into the future.

The goal of this panel session is to communicate the options for extending computing beyond the end of our current silicon lithography roadmaps. The correct answers may be found in new ways to extend digital electronics efficiency or capability, or even new models of computation such as neuromorphic and quantum.

**Social evening event (separate registration required)**, Ristorante Ciani

#### Flash Poster Session

The aim of this session is to allow poster presenters to introduce the topic of their poster and motivate the audience to visit them at the evening poster session. Authors will be strictly limited to 40 seconds each; after this time the presentation will be stopped automatically.

**Coffee Break**

**Minisymposia and Papers Sessions**

The computation of large numbers of inner eigenpairs of large sparse matrices is known to be both an algorithmic challenge and highly resource-intensive in terms of compute power. As compute capabilities have continuously increased over the past decades, computational models and applications requiring information about inner eigenstates of sparse matrices have become numerically accessible in many research fields. At the same time, new algorithms (e.g. FEAST or SSM) have been introduced, and long-standing methods such as filter diagonalization are still being applied, improved, and extended. However, the trend towards highly parallel (heterogeneous) compute systems is challenging the efficiency of existing solver packages as well as building-block libraries, and calls for new massively parallel solvers with high hardware efficiency across different architectures. Thus, substantial effort is being put into the implementation of new sparse (eigen)solver frameworks, which face challenges in terms of ease of use, extensibility, sustainability, and hardware efficiency. Software engineering and holistic performance engineering concepts are deployed to address these challenges. The significant momentum in the application fields, numerical methods, and software layers calls for strong interaction between the scientists involved in these activities in order to provide sustainable and hardware-efficient frameworks for computing inner eigenvalues of large sparse matrices. The minisymposium offers a platform to bring together leading experts in this field to discuss recent developments at all levels: from the application down to hardware-efficient implementations of basic kernel operations. Application experts will present their current and upcoming research requiring the computation of inner eigenvalues. State-of-the-art eigensolvers and new algorithmic developments will be discussed along with challenges faced by library developers in terms of software sustainability and hardware efficiency.
Many of these topics are not limited to the inner sparse eigenvalue problems but are of general interest for sparse linear algebra algorithms for current and future HPC architectures.
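The basic shift-and-invert idea behind many interior eigensolvers can be illustrated in a few lines of serial SciPy (the diagonal matrix is a trivial stand-in; the talks concern massively parallel variants of this and of contour-based methods such as FEAST):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
A = sp.diags(np.arange(1.0, n + 1.0))   # sparse symmetric test matrix
sigma = 1000.5                          # target: eigenvalues near sigma

# Shift-and-invert: the largest-magnitude eigenvalues of (A - sigma*I)^-1
# correspond to the eigenvalues of A closest to sigma, deep inside the
# spectrum, where plain Lanczos iteration would converge very slowly.
vals, vecs = eigsh(A.tocsc(), k=4, sigma=sigma, which='LM')
print(np.sort(vals))                    # the four eigenvalues nearest 1000.5
```

The factorization of the shifted matrix is exactly the step that becomes the scalability bottleneck on massively parallel systems, which motivates the polynomial filtering and contour integration alternatives discussed in the session.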

Part 1 of the minisymposium focuses on applications and algorithms.

The design of materials for energy production and storage is a subject of great scientific and technological interest, with a potentially far-reaching impact on society. The study of such systems is, however, challenging: one deals with reactions that take place at surfaces in the presence of highly disordered environments. Car-Parrinello-type simulations combined with *ab-initio* methods are therefore needed. In this minisymposium we invite specialists to discuss the peculiar challenges of this field. Topics we expect to cover include oxidation processes in solution, electron transfer, and morphology and chemistry at interfaces.

The main objective of this minisymposium is to bring together international scientists working in the area of particle-based modeling with applications in life sciences, fluids, and materials. Numerical methods include, but are not restricted to, Coarse-Grained Molecular Dynamics (CG-MD), Dissipative Particle Dynamics (DPD), Smoothed Dissipative Particle Dynamics (SDPD), Smoothed Particle Hydrodynamics (SPH), the Lattice-Boltzmann Method (LBM), the Moving Particle Semi-Implicit Method (MPS), Brownian Dynamics (BD), and Stokesian Dynamics (SD). The goal of the minisymposium is, on the one hand, to share state-of-the-art results in various applications of particle-based methods and, on the other, to discuss technical issues of computational modeling.

Progress in weather and climate modeling is tightly linked to the increase in computing resources available for such models. Emerging heterogeneous high-performance architectures are a unique opportunity to address these requirements in an energy- and time-efficient manner. The hardware changes of emerging computing platforms are accompanied by dramatic changes in programming paradigms, and these changes have only just started. Adapting current weather and climate codes to efficiently exploit such architectures requires an effort that is both costly and error-prone. The long software lifecycles of weather and climate codes render the situation even more critical, as hardware lifecycles are much shorter in comparison. Furthermore, atmospheric models are developed and used by a large variety of researchers on a myriad of computing platforms, which makes portability a crucial requirement in any kind of development. Developers of weather and climate models are struggling to achieve a better separation of concerns, in order to decouple the high-level specification of equations and solution algorithms from the hardware-dependent, optimized low-level implementation. The solutions will likely differ across different parts of the codes, due to different predominant algorithmic motifs and data structures. Using concrete porting efforts as examples, this session will illustrate different approaches used today and (possibly) in the future.
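The separation-of-concerns idea can be sketched in a few lines: a stencil is specified once at a high level and executed by interchangeable backends, so that only the backend needs porting when the hardware changes. This is a minimal illustrative sketch (all names are hypothetical, not from any actual weather or climate code):

```python
import numpy as np

def laplacian_spec():
    """High-level specification: offsets and weights of a 5-point Laplacian."""
    return [((0, 0), -4.0), ((1, 0), 1.0), ((-1, 0), 1.0),
            ((0, 1), 1.0), ((0, -1), 1.0)]

def backend_loops(u, spec):
    """Reference backend: plain nested loops over interior points."""
    out = np.zeros_like(u)
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            out[i, j] = sum(w * u[i + di, j + dj] for (di, dj), w in spec)
    return out

def backend_vectorized(u, spec):
    """"Optimized" backend: the same spec mapped onto NumPy array slices."""
    out = np.zeros_like(u)
    acc = sum(w * u[1 + di:u.shape[0] - 1 + di, 1 + dj:u.shape[1] - 1 + dj]
              for (di, dj), w in spec)
    out[1:-1, 1:-1] = acc
    return out

# Both backends realize the same high-level specification.
u = np.random.rand(32, 32)
spec = laplacian_spec()
assert np.allclose(backend_loops(u, spec), backend_vectorized(u, spec))
```

A GPU or vector backend could be added in the same way, without touching the stencil specification itself.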

The ability to computationally design, optimize, or understand the properties of energy-relevant materials is fundamentally contingent on the existence of methods to simulate them accurately, efficiently, and reliably. Quantum mechanics-based approaches must necessarily play a foundational role, since only these approaches can describe matter in a truly first-principles (parameter-free) and therefore robust manner. Quantum Monte Carlo (QMC) methods are ideal candidates for this since they robustly deliver highly accurate calculations of complex materials and, with increased computer power, provide systematically improvable accuracies that are not possible with other first-principles methods. By directly solving the Schrödinger equation and by treating the electrons at a consistent many-body level, these methods can be applied to general elements and materials, and are unique in satisfying robust variational principles: more accurate solutions result in lower variational energies, enabling robust confidence intervals to be assigned to predictions. The stochastic nature of QMC facilitates mapping onto high-performance computing architectures, making QMC one of the few computational materials methods capable of fully exploiting today's petaflop machines.

This symposium will present some of the latest developments on QMC methods, from an application and a development perspective.
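The variational principle behind QMC can be seen in a toy example: a minimal variational Monte Carlo sketch for the 1D harmonic oscillator (not any production QMC code), where the trial wavefunction psi(x) = exp(-alpha x^2) gives its lowest energy estimate at the exact ground state alpha = 0.5:

```python
import random, math

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=0):
    """Estimate <E> for trial psi(x) = exp(-alpha x^2) on H = -1/2 d2/dx2 + x^2/2
    by Metropolis sampling of |psi|^2. Local energy: E_L = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance on |psi|^2 = exp(-2 alpha x^2)
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha**2)
    return e_sum / n_steps

# At alpha = 0.5 the local energy is constant, so the estimate is exactly 0.5;
# any worse trial wavefunction yields a higher (noisier) energy estimate.
print(vmc_energy(0.5))   # 0.5
```

Each Metropolis step is independent and cheap, which is why the method maps so naturally onto massively parallel machines.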

Human fertility is based on physiological events such as adequate follicle maturation, ovulation, ovum fertilization, corpus luteum formation, and endometrial implantation, proceeding in chronological order. Diseases such as endometriosis or the polycystic ovary syndrome seriously disturb menstrual cycle patterns, oocyte maturation, and consequently fertility. Besides endocrine diseases, several environmental and lifestyle factors, especially smoking and obesity, also have a negative impact on fertility. Modern techniques in reproductive medicine such as in vitro fertilization or intracytoplasmic sperm injection have increased the chances of successful reproduction. However, current success rates vary significantly among clinics, still reaching only about 35% even in well-functioning centers. This is mainly due to the use of different treatment protocols and limited knowledge about individual variability in the dynamics of reproductive processes.

This minisymposium brings together researchers with different scientific backgrounds (computer science, mathematics, medicine) who work on developing model-based clinical decision support systems for reproductive endocrinologists, enabling the simulation and optimization of treatment strategies in silico. Virtual physiological human (VPH) models together with patient-specific parameterizations (Virtual Patients), formalized treatment strategies (Virtual Doctors), and software tools (Virtual Hospital) enable in silico clinical trials, that is, clinical trials performed by means of computer simulations over a population of virtual patients. In silico clinical trials are recognized as a disruptive key innovation for medicine, as they allow medical scientists to reduce and postpone invasive, risky, costly, and time-consuming in vivo experiments on new treatments to much later stages of the testing process, when a deeper knowledge of their effectiveness and side effects has been acquired via simulations.

The talks in the minisymposium will highlight different aspects of a virtual hospital in reproductive medicine.

- Distributed service oriented systems for clinical decision support

- HPC within in silico clinical trials

- Construction of large virtual patient populations

- Formalization of treatment strategies in silico

- VPH model validation, treatment verification and the design of individualized protocols

- Databases and software for the virtual hospital

- Large scale integrated physiology models

**Lunch**, Foyer

**Minisymposia and Papers Sessions**

Interfaces (solid-solid, solid-liquid, solid-gas, as well as liquid-gas) give rise to a variety of interesting and crucial functions in condensed-matter physics and chemistry. The space-charge layer plays an important role in semiconductor physics, underlying several fundamentals of electronic devices. Electrochemistry, on the other hand, must always take into account the electric double layer, which is crucial for catalysis, solar-cell, and battery applications. These modulations of charge-carrier distributions can extend up to the micrometer scale, though nanometer-scale modulation also occurs in several cases. First-principles electronic-structure calculation alone is therefore insufficient: special techniques to deal with the interface are necessary, on top of large-scale and long-time QM-based simulations. QM/MM techniques or combinations of QM with continuum or classical theories are potential solutions. This minisymposium brings together cutting-edge researchers working on these issues to discuss and evaluate individual methods and to suggest future directions. This is important because the relationships among the various interface methods are difficult to see. Besides, the materials-science flavor of this minisymposium provides perspectives addressing computer scientists and applied mathematicians, which will encourage the future development of interdisciplinary techniques.

With a mass larger than that of the Sun compressed into an almost perfect sphere with a radius of only a dozen kilometers, neutron stars are the most compact material astrophysical objects we know. In their cores, particles are squeezed together more tightly than in atomic nuclei, and no terrestrial experiment can reproduce the extreme physical conditions of density, temperature, and gravity. With such properties, it is clear that neutron stars in binary systems are unique laboratories to explore fundamental physics – such as the state of matter at nuclear densities – and fundamental astrophysics – such as the mechanism behind the “central engine” of short gamma-ray bursts. Yet, such an exploration does not come easy. The nonlinear dynamics of binary neutron stars, which requires the combined solution of the Einstein equations together with those of relativistic hydrodynamics and magnetohydrodynamics, and the complex microphysics that accompanies the inspiral and merger, make sophisticated numerical simulations in three dimensions the only route to accurate modeling.

This minisymposium will focus on the gravitational-wave emission during the inspiral and the connection between merging binaries and the corresponding electromagnetic counterpart. These two problems require urgent attention as they are both likely to play an important role in the imminent detection of gravitational waves from binary neutron stars by interferometric detectors such as LIGO and Virgo.

The computation of large sets of inner eigenpairs of large sparse matrices is known to be both an algorithmic challenge and resource-intensive in terms of compute power. As compute capabilities have continuously increased over the past decades, computational models and applications requiring information about inner eigenstates of sparse matrices have become numerically accessible in many research fields. At the same time, new algorithms (e.g. FEAST or SSM) have been introduced, and long-standing methods such as filter diagonalization are still being applied, improved, and extended. However, the trend towards highly parallel (heterogeneous) compute systems is challenging the efficiency of existing solver packages as well as building-block libraries, and calls for new massively parallel solvers with high hardware efficiency across different architectures. Thus, substantial effort is put into the implementation of new sparse (eigen)solver frameworks, which face challenges in terms of ease of use, extensibility, sustainability, and hardware efficiency. Software engineering and holistic performance engineering concepts are deployed to address these challenges. The significant momentum in the application fields, numerical methods, and software layers calls for a strong interaction between the scientists involved in those activities, in order to provide sustainable and hardware-efficient frameworks for computing inner eigenvalues of large sparse matrices. The minisymposium offers a platform to bring together leading experts in this field to discuss recent developments at all levels: from the application down to hardware-efficient implementations of basic kernel operations. Application experts will present their current and upcoming research fields requiring the computation of inner eigenvalues. State-of-the-art eigensolvers and new algorithmic developments will be discussed, along with challenges faced by library developers in terms of software sustainability and hardware efficiency.
Many of these topics are not limited to the inner sparse eigenvalue problems but are of general interest for sparse linear algebra algorithms for current and future HPC architectures.
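The core idea behind many inner eigensolvers can be illustrated with shift-invert (inverse) iteration, which converges to the eigenpair closest to a chosen shift. This is a dense-matrix toy sketch only; production solvers factorize the shifted matrix once and combine this idea with Lanczos/Arnoldi subspaces or with filtering (as in FEAST):

```python
import numpy as np

def inner_eigenpair(A, sigma, n_iter=50):
    """Inverse iteration with shift sigma on a symmetric matrix A:
    repeatedly apply (A - sigma I)^{-1} and normalize, converging to the
    eigenvector whose eigenvalue lies closest to sigma."""
    n = A.shape[0]
    shifted = A - sigma * np.eye(n)
    x = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        y = np.linalg.solve(shifted, x)   # apply (A - sigma I)^{-1}
        x = y / np.linalg.norm(y)
    lam = x @ A @ x                       # Rayleigh quotient
    return lam, x

# The shift selects an *inner* eigenvalue, not the extremal ones.
A = np.diag([1.0, 2.0, 3.0, 10.0])
lam, vec = inner_eigenpair(A, sigma=2.8)
print(round(lam, 6))   # 3.0 — the eigenvalue nearest the shift
```

For genuinely sparse problems the dense `solve` would be replaced by a sparse factorization or a preconditioned iterative solve, which is exactly where the hardware-efficiency questions of the minisymposium enter.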

Part 2 of the minisymposium focuses on algorithms as well as software and performance aspects.

Modeling and simulation of problems in cardiovascular mechanics can contribute significantly to the development of the field of precision medicine for cardiovascular and systemic phenomena. The relevant models of the related multiphysics problems can only be numerically simulated through the efficient use of modern techniques from computational mathematics, mechanics, and high-performance computing. Problems addressed in this minisymposium include reentry dynamics in cardiac electromechanical models, early atherosclerosis progression, and fluid-structure interaction using realistic arterial wall material models. This minisymposium aims at gathering researchers and experts in computational modeling and simulation of the heart and the systemic circulation.

Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes that can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, and task-based parallelism, as well as new algorithms adapted to modern hardware. This minisymposium shall bring together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.

This minisymposium will bring together researchers who use molecular simulation in their respective fields, in order to discuss recent advances and to exchange experiences and ideas. The focus of this minisymposium is the analysis of very large biological and chemical data sets arising from the simulation of complex molecular systems, by means of developing efficient algorithms and implementing them on high-performance supercomputers.

This approach is necessary for designing smart drug-like molecules for Precision Medicine. The tools include, but are not limited to, algebraic stochastic dimension reduction methods such as nonnegative matrix decomposition for very large data sets obtained from atomic spectroscopy, Markov State Models (MSMs), Multiscale Methods in Time and Space for studying molecular conformation, PDEs for the analysis of multivalent binding kinetics for biochemical systems, and spectral clustering.
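As a toy sketch of one of the listed tools, the classical Lee-Seung multiplicative updates realize nonnegative matrix factorization, the basic dimension-reduction step behind the stochastic methods mentioned above (a minimal illustration, not a production implementation for spectroscopy-scale data):

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Factor a nonnegative matrix V ≈ W @ H with W, H >= 0 using
    Lee-Seung multiplicative updates. Both updates keep the factors
    nonnegative and never increase the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-12                                # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Demo on data with exact nonnegative rank-3 structure: the relative
# reconstruction error shrinks as the factors adapt.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf(V, rank=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
assert rel_err < 0.2
```

The nonnegativity constraint is what makes the resulting components interpretable as additive parts, e.g. spectra or conformational populations.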

The main objective of this minisymposium is to bring together international scientists working in the area of particle-based modeling with applications in life sciences, fluids, and materials. Numerical methods include, but are not restricted to, Coarse-Grained Molecular Dynamics (CG-MD), Dissipative Particle Dynamics (DPD), Smoothed Dissipative Particle Dynamics (SDPD), Smoothed Particle Hydrodynamics (SPH), the Lattice-Boltzmann Method (LBM), the Moving Particle Semi-Implicit Method (MPS), Brownian Dynamics (BD), and Stokesian Dynamics (SD). The goal of the minisymposium is, on the one hand, to share state-of-the-art results in various applications of particle-based methods and, on the other, to discuss technical issues of computational modeling.

This minisymposium lies at the interface of computer science and applied mathematics, presenting recent advances in methods, ideas and algorithms addressing resilience for extreme-scale computing.

Extreme-scale systems are expected to exhibit more frequent faults, in both hardware and software, making resilience a key problem to face. On the hardware side, challenges will arise from the expected increase in the number of components, variable operational modes (e.g. lower voltage to meet energy requirements), and increasing complexity (e.g. memory hierarchies, heterogeneous cores, and more, smaller transistors). The software stack will need to keep up with the increasing hardware complexity, hence becoming itself more error-prone.

In general, we can distinguish three main categories of faults: hard (a hardware component fails and needs to be fixed or replaced), soft/transient (a fault occurs but is corrected by the hardware or low-level system software), and silent/undetectable (an error occurs but cannot be detected and fixed). The first two categories have a well-defined impact on the run and on the system itself. The third class is more subtle: its effect is simply to alter stored, transmitted, or processed information, and there is no opportunity for an application to directly recover from the fault. This can lead to noticeable impacts such as crashes and hangs, as well as corrupted results.
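The third category is the one that checksum-style protection targets: keeping redundant information alongside the data turns a silent error into a detectable one. A toy illustration (real HPC schemes apply the same idea, e.g. via algorithm-based fault tolerance, at far larger scale):

```python
import zlib

def protect(data: bytes):
    """Store a CRC32 checksum alongside a block of data."""
    return data, zlib.crc32(data)

def verify(data: bytes, checksum: int) -> bool:
    """Recompute the checksum and compare: a mismatch exposes corruption."""
    return zlib.crc32(data) == checksum

payload, crc = protect(b"simulation state vector")
assert verify(payload, crc)                 # intact data passes
corrupted = b"simulatiom state vector"      # one silently flipped byte
assert not verify(corrupted, crc)           # the corruption is detected
```

Detection alone does not repair the data, but it converts an otherwise silent corruption into an event the application can react to, e.g. by rolling back to a checkpoint.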

Current systems do not take an integrated approach to fault tolerance: the various subsystems have their own mechanisms for error detection and recovery (e.g. ECC memory). Nor is there good error isolation; for example, the failure of any component in a parallel job generally causes the entire job to fail. In fact, the current standard of the Message Passing Interface (MPI) does not support failing ranks. Common approaches to fault tolerance include hardware-level redundancy, algorithmic error correction, and checkpoint/restart. The latter is currently the most widely used approach. However, the tight power budget targeted for future systems and the expected shortening of the mean time between failures (MTBF) may render it infeasible for extreme-scale computing.
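The checkpoint/restart approach can be sketched in a few lines: an iterative computation periodically writes its state to disk, and after a failure the run resumes from the last checkpoint instead of starting over. The file name and state layout below are purely illustrative, not any particular library's format:

```python
import json, os, tempfile

CKPT = os.path.join(tempfile.gettempdir(), "demo_checkpoint.json")

def run(n_steps, fail_at=None):
    """Iterative 'computation' that checkpoints every 10 steps and can
    simulate a mid-run failure at step `fail_at`."""
    # Restart: load the last checkpoint if one exists.
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            state = json.load(f)
    else:
        state = {"step": 0, "total": 0.0}
    while state["step"] < n_steps:
        if fail_at is not None and state["step"] == fail_at:
            raise RuntimeError("simulated node failure")
        state["total"] += state["step"]      # the actual work per step
        state["step"] += 1
        if state["step"] % 10 == 0:          # periodic checkpoint
            with open(CKPT, "w") as f:
                json.dump(state, f)
    return state["total"]

if os.path.exists(CKPT):
    os.remove(CKPT)
try:
    run(100, fail_at=57)                     # crash at step 57...
except RuntimeError:
    pass
result = run(100)                            # ...restart resumes from step 50
assert result == sum(range(100))             # same answer as an unfailed run
os.remove(CKPT)
```

The checkpoint interval is the key tuning knob: the MTBF concern above is precisely that shrinking failure intervals force ever more frequent (and thus ever more expensive) checkpoints.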

It is increasingly recognized that hardware-only resilience will likely become infeasible in the long term. This creates the need for an integrated approach, where resilience is tackled across all layers to mitigate the impact of faults in a holistic fashion, keeping in mind its interplay with the energy budget. Hence, in parallel to the continuous effort aimed at improving resilience in hardware and system software, new approaches and ideas need to be incorporated at the highest level, i.e. algorithms and applications, to account for potential faults such as silent data corruptions (SDCs). In other words, algorithms themselves need to be made more robust and resilient.

This minisymposium explores HPC resilience in the context of algorithms, applications, hardware, systems, and runtimes. Specifically, the talks have been selected to cover topics ranging from solvers, programming models, and energy-aware computing to approximate computing, memory vulnerability, and post-Moore's-law architectures.

**Coffee Break**, Foyer

#### PNL02 Sustainable Software Development and Publication Practices in the Computational Sciences

The goal of the PASC papers program is to advance the quality of formal scientific communication between the related disciplines of computational science and engineering. The program was built from an observation that the computer science community traditionally publishes in the proceedings of major international conferences, while the domain science community generally publishes in discipline-specific journals – and cross-readership is very limited. The aim of our initiative is to build and sustain a platform that enables engagement between the computer science, applied mathematics, and domain science communities, through a combination of conference participation, conference papers, and post-conference journal publications. The PASC papers initiative allows authors to benefit from the interdisciplinarity and rapid dissemination of results afforded by the conference venue, as well as from the impact associated with subsequent publication in a high-quality scientific journal. To help facilitate such journal publication, PASC has recently formed collaborative partnerships with a number of scientific journals, including Computer Physics Communications (CPC), the Journal of Advances in Modeling Earth Systems (JAMES), and ACM Transactions on Mathematical Software (ACM TOMS). In this panel discussion, representatives from these journals are invited to express their thoughts on publication practices in the computational sciences, including the publication of software codes. We will discuss best practices for sustainable software development and address questions such as: How can we ensure that code and infrastructure will still be there in ten-plus years? How can we validate published results and guarantee reproducibility? Finally, we will describe our vision for the PASC papers initiative going forward.

Panelists:

- Thomas Schulthess (CSCS / ETH Zurich, Switzerland)

- Walter Dehnen (University of Leicester): Editor (Astronomy and Astrophysics) for Computer Physics Communications (CPC)

- Robert Pincus (University of Colorado): Editor in Chief of the Journal of Advances in Modeling Earth Systems (JAMES)

- Michael A. Heroux (Sandia National Laboratories): Associate Editor (Replicated Computational Results) for ACM Transactions on Mathematical Software (TOMS)