Linear elastodynamics is a physically and mathematically well-understood problem, and numerical techniques have been successfully developed and applied for decades. However, the enormous scale of the problem, reaching 10^12 degrees of freedom, as well as the multi-scale complexity of the underlying parameter space, render seismic applications some of the most challenging HPC problems in the physical sciences. This is particularly the case for the inverse problem of mapping millions of observations to model parameters. Technical bottlenecks arise on many fronts: meshing complex 3D geological structures, scalability, adaptation to emerging architectures, data infrastructure for millions of simulations, provenance, and code usability.
In this minisymposium, we will hear about a diverse range of topics covering state-of-the-art wave propagation at scales ranging from the globe to the human body, each with specifically adapted techniques and their HPC solutions. Many of the talks will be based on variants of the spectral-element technique, which has dominated large-scale seismology for the past two decades. Novel variants include its scalable adaptation to tetrahedra, a new flexible implementation in C++, coupling to pseudo-spectral approaches, and scaling on emerging architectures. Other techniques include the discontinuous Galerkin method for dynamic earthquake rupture and an immersive approach coupling numerical modeling with wave tank experiments on FPGAs.
Many of the talks will be driven by requirements from specific applications such as nonlinear earthquake rupture dynamics in a complex 3D geological fault system, multiscale geological structures at scales reaching the deep Earth interior, wave tank experiments, seismic tomography at large and industrial scales, and an application to breast cancer detection using ultrasound.
In the discussion, we will strive to identify common bottlenecks, ideas for adapting to emerging architectures, and a possible basic set of shared algorithmic solutions, and discuss how to consolidate the different approaches around commonalities such as meshing, MPI strategies, data infrastructures, or numerical solvers.
In this series of two minisymposia, a special focus lies on providing a platform for exchanging ideas about scalable, memory-efficient, fast, and resilient solution techniques. These characteristics are crucial for science- and engineering-driven applications that make use of exascale computing, such as geophysics, astrophysics, and aerodynamics. Algorithms in high-performance computing require a rethinking of standard approaches to ensure, on the one hand, full use of future computing power and, on the other hand, energy efficiency. A careful implementation of all performance-relevant parts and an intelligent combination with external libraries are fundamental for exascale computation. Multigrid and domain decomposition methods play an important role in many scientific applications, yet the two communities have often developed their ideas separately while exploiting the immense compute power of supercomputers. The minisymposia address both communities and focus on exchanging current research progress related to exascale-enabled solution techniques.
The language of linear algebra is ubiquitous across scientific and engineering disciplines and is used to describe phenomena and algorithms alike. The translation of linear algebra expressions into high-performance code is a surprisingly challenging problem, requiring knowledge of high-performance computing, compilers, and numerical linear algebra. Typically, the user is offered two contrasting alternatives: either high-level languages (e.g. Matlab), which enable fast prototyping at the expense of performance, or low-level languages (C and Fortran), which allow for highly efficient solutions at the expense of extremely long development cycles. This workshop brings together domain specialists who believe that productivity and high performance need not be mutually exclusive.
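As a small illustration of the translation problem (not taken from the workshop itself), the following NumPy sketch shows three translations of the ordinary least-squares expression b = (X^T X)^{-1} X^T y, from a literal high-level transcription to numerically and computationally preferable variants; the data are random and purely illustrative.

# Three translations of b = (X^T X)^{-1} X^T y in NumPy (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))
y = rng.standard_normal(2000)

# Literal, Matlab-style translation: forms an explicit inverse.
b_naive = np.linalg.inv(X.T @ X) @ (X.T @ y)

# Sharper translation: solve the normal equations directly, which is cheaper
# and better conditioned than forming the inverse.
b_solve = np.linalg.solve(X.T @ X, X.T @ y)

# Preferable in practice: a dedicated least-squares routine (QR/SVD based)
# avoids the normal equations altogether.
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(b_naive, b_lstsq, atol=1e-8),
      np.allclose(b_solve, b_lstsq, atol=1e-8))

All three variants return the same coefficients here, but they differ markedly in cost and numerical robustness, which is precisely the gap that automated expression-to-code translation has to bridge.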
At the beginning of the current century we face massive challenges due to the increasing global demand for energy, centered on two major issues. On the one hand, conventional fossil fuel resources such as oil, natural gas, and coal are limited and dwindling. On the other hand, the emissions from combustion of fossil fuels evidently impact the chemical composition of our atmosphere, leading to adverse effects on the climate and environment. These inescapable global challenges urgently demand technological advances in energy conversion, storage, and transport. The search for novel materials for energy applications has recently become an extremely active area of research worldwide, with efforts in chemistry, solid-state physics, and materials science, for example via the Materials Genome Initiative in the US and related initiatives in other countries. In this search, computational tools are being actively developed not only to explore the uncharted chemical space of new materials, but also to understand the complex interplay of materials properties with the underlying crystal structures.
One particular class of materials for energy applications is thermoelectric materials, which are needed to drive thermoelectric generators that allow for a reliable, clean, emission-free conversion of (waste) heat into electricity. Until the mid-1990s, thermoelectrics had been considered inefficient and not economically relevant, but with improved structural engineering and intense research on novel complex materials, interest in thermoelectric materials has recently been revived. The efficiency of a thermoelectric material is governed by the so-called figure of merit zT, which is maximized by increasing the thermopower and electrical conductivity while reducing the thermal conductivity. These material properties are, however, strongly interrelated; for example, in most materials the thermal and electrical conductivities are coupled through the Wiedemann-Franz law. Hence, the search for a material with maximal zT poses a non-trivial materials design challenge.
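For reference, in standard notation (S the thermopower, \sigma the electrical conductivity, \kappa_e and \kappa_l the electronic and lattice contributions to the thermal conductivity, T the absolute temperature, and L the Lorenz number), the two relations mentioned above can be written as

\[
  zT = \frac{S^2 \sigma\, T}{\kappa_e + \kappa_l},
  \qquad
  \frac{\kappa_e}{\sigma} = L\, T \quad \text{(Wiedemann-Franz law)},
\]

which makes the design conflict explicit: raising \sigma to increase zT also raises \kappa_e through the Wiedemann-Franz relation.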
This symposium aims at bringing together scientists to share their computational efforts in thermoelectric materials development. An accurate description of bulk lattice thermal transport, which is governed by phonon-phonon interactions, demands advanced simulation techniques and large HPC infrastructures. Solving the Boltzmann phonon transport equation requires knowledge of the anharmonic energy contributions that give rise to phonon scattering, which is one of the most computationally demanding aspects of modeling thermal resistivity. Density functional perturbation theory and finite-difference methods are the current state-of-the-art approaches, but remain computationally very demanding. Furthermore, the interactions of phonons with electrons become increasingly important at elevated temperatures and have recently been a focus of research in thermoelectric materials. Finally, methods for modeling transport properties at large scale are required for the discovery of new materials with improved thermoelectric properties. The focus of this symposium will be on novel approaches to modeling transport properties, based among others on machine learning, signal processing, and high-throughput techniques, to advance the in silico discovery of thermoelectric materials.
The simulation of turbulent flows in engineering applications is often characterized by high Reynolds numbers, physical processes that depend on length scales too small to be resolved, and complex geometry. Advances in computing hardware notwithstanding, it is becoming clear that large eddy simulation (LES) of such flows, where the resolved/filtered scales of motion are evolved and the unresolved scales are modeled, is still intractable. The large number of grid points required for a well-resolved simulation of the flow physics places a greater need on modeling the unresolved scales and evolving the resolved scales in a manner that minimizes dissipative and dispersive errors. Given that in an LES the errors in the solution are a combination of filtering errors, characterized by the filter width (delta), numerical errors, characterized by the order of accuracy and the cell width (h), and the sub-grid scale (SGS) modeling error, assessment of the resulting "solution" is complicated by the difficulty of isolating the effects of each of these contributing factors. This area of research offers opportunities to quantify the tradeoffs between the computational advantages offered by higher-order numerical methods and the turbulence-resolving capabilities of these methods in the context of LES of high Reynolds number flows. On emerging exascale computing platforms, where the available power is capped at 20 MW, the architecture is increasingly characterized by processors with a large number of cores running at dynamic clock speeds that decrease as the cores begin to overheat, deep memory hierarchies with less on-chip memory, and multiple pathways to parallelizing algorithms, ranging from coarse-grained parallelism (MPI) to fine-grained parallelism (threads, vectorization). The reduced on-chip memory requires time-consuming operations to fetch data from external (off-chip) memory into local cache in order to make computations possible. On such platforms, the traditional measure of assessing the efficiency of parallel codes, via FLOPs alone, is being replaced by the more meaningful arithmetic intensity (AI), defined as the ratio of FLOPs to the number of load-store operations. For turbulence simulations, it would appear that higher-order numerical methods, being less limited by memory bandwidth, offer an obvious advantage. However, despite a veritable body of literature documenting the advantages of higher-order methods when applied to ideal problems (in the sense that the cases to which they have been applied are those where one has a fairly high degree of control over the inflow and boundary conditions and the geometry of the computational domain), it is not clear whether these methods can serve as the gold standard when used to drive the compute engine of a predictive flow simulation tool, with considerable uncertainties in the flow conditions and complexities in the geometry. The focus of this minisymposium, therefore, is the presentation of higher-order numerical discretizations, their impact on the resolvable turbulent flow physics, and the scaling and parallel performance of higher-order discretizations on emerging computing hardware.
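As a back-of-the-envelope illustration of the arithmetic-intensity measure defined above (not taken from the talks), the following sketch estimates AI for a 7-point stencil update under two caching assumptions and compares it with a hypothetical machine balance; all operation counts and hardware numbers are illustrative.

# Rough arithmetic-intensity estimate for a 7-point stencil update, using the
# definition given above (FLOPs per load/store operation). Counts are illustrative.
flops_per_point = 7 + 6          # 7 multiplies (weights) + 6 additions
loads_per_point_worst = 7        # all seven neighbours fetched from memory
loads_per_point_best = 1         # perfect cache reuse: each value loaded once
stores_per_point = 1

ai_worst = flops_per_point / (loads_per_point_worst + stores_per_point)
ai_best = flops_per_point / (loads_per_point_best + stores_per_point)
print(f"AI (no cache reuse)   : {ai_worst:.2f} FLOPs per load/store")
print(f"AI (perfect reuse)    : {ai_best:.2f} FLOPs per load/store")

# Roofline-style check against a hypothetical machine balance
# (peak FLOP rate divided by sustainable load/store throughput).
machine_balance = 8.0            # hypothetical FLOPs per load/store at peak
for label, ai in [("no reuse", ai_worst), ("perfect reuse", ai_best)]:
    bound = "memory-bound" if ai < machine_balance else "compute-bound"
    print(f"{label}: {bound}")

Even this crude estimate shows why higher-order discretizations, which raise the FLOP count per fetched value, shift stencil-like kernels away from the memory-bound regime.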
Recent editions of the TOP500 list show the growing impact of computing systems with architectural features such as more complex memory hierarchies incorporating fast yet limited memory per core, the addition of large-capacity non-volatile memory, a substantial increase in cores per shared-memory "island", and the ever closer integration of high-performance interconnects with the CPU and memory subsystem. Application codes that want to take advantage of such systems need to be reshaped to achieve good levels of performance. At the same time, it is important to ensure that codes can be maintained and developed further without undue complexity imposed by the execution systems. Moreover, significant advances in the field of reconfigurable computing and its system integration have generated major interest: new generations provide highly efficient floating-point units, and fast cache-coherent interconnects to CPUs have been announced. On the software side, the momentum around OpenCL is lowering the entry barriers. Tighter integration of FPGAs and CPUs will allow traditional FPGA workloads to move closer to the more "general purpose" server and require less specialized custom boards.
These emerging computing architectures drive innovative developments in computer science and applied mathematics, which in return enable new capabilities for scientific applications. The minisymposium will be an excellent opportunity to share some of the most recent, leading-edge advances for scientific applications enabled by these trends. New algorithmic developments, and how to code them on emerging architectures, will be at the heart of the session: new PDE solvers for QCD, efficient implementations of Fast Multipole Methods applied to biomolecular simulations, how to use the cache-aware roofline model to guide performance optimization, and high energy physics workloads projecting efficient usage of nodes combining Xeon and FPGA.
The applied life sciences are of huge importance, both economically (the pharmaceutical sector alone accounts for roughly 30% of Swiss exports) and in terms of tackling societal challenges such as aging populations, the increasing burden of chronic diseases, and spiraling costs of health care. The challenges in the industry are many and varied, and include the need for a much better understanding of why certain promising drugs fail trials, how best to identify and model sub-groups in patient populations to deliver on the promises of precision medicine, and how to integrate information and models ranging from the molecular level up to patient-worn sensors.
Due to their importance, the applied life sciences should be supported with the best possible tools to tackle these challenges. Whilst the use of computing is reasonably well established in this sector, the use of High-Performance Computing (HPC) is much less well established when compared to other sectors such as engineering or physics. This is indeed unfortunate given the potential of in silico experiments and analysis of complex data to advance the state of the art in the field and deliver concrete benefits to society.
In this minisymposium we will explore various approaches to the use of computing for the applied life sciences, ranging from lower-level systems modeling, through the application of large-scale machine learning to high volume screens in drug discovery, to analysis of genomic information. Each of these has different modeling and scaling challenges and has had varying levels of success in the application of HPC to the problem. We will have speakers from industry, academia and industrial-academic collaborations alike, giving varying perspectives on the state of the art and the potential for the application of HPC.
The specific areas covered by the speakers will be the following:
- HPC implementation of multi-target compound activity prediction in chemogenomics, based on state-of-the-art large-scale machine learning techniques
- Challenges in data handling and computation for the analysis of DNA for personalized healthcare
- Systems biology and HPC
Linear elastodynamics is a physically and mathematically well-understood problem, and numerical techniques have been successfully developed and applied for decades. However, the enormous scale of the problem, reaching 10^12 degrees of freedom, as well as the multi-scale complexity of the underlying parameter space, render seismic applications some of the most challenging HPC problems in the physical sciences. This is particularly the case for the inverse problem of mapping millions of observations to model parameters. Technical bottlenecks arise on many fronts: meshing complex 3D geological structures, scalability, adaptation to emerging architectures, data infrastructure for millions of simulations, provenance, and code usability.
In this minisymposium, we will hear about a diverse range of topics covering state-of-the-art wave propagation at scales ranging from the globe to the human body, each with specifically adapted techniques and their HPC solutions. Many of the talks will be based on variants of the spectral-element technique, which has dominated large-scale seismology for the past two decades. Novel variants include its scalable adaptation to tetrahedra, a new flexible implementation in C++, coupling to pseudo-spectral approaches, and scaling on emerging architectures. Other techniques include the discontinuous Galerkin method for dynamic earthquake rupture and an immersive approach coupling numerical modeling with wave tank experiments on FPGAs.
Many of the talks will be driven by requirements from specific applications such as nonlinear earthquake rupture dynamics in a complex 3D geological fault system, multiscale geological structures at scales reaching the deep Earth interior, wave tank experiments, seismic tomography at large and industrial scales, and an application to breast cancer detection using ultrasound.
In the discussion, we will strive to identify common bottlenecks, ideas for adapting to emerging architectures, and a possible basic set of shared algorithmic solutions, and discuss how to consolidate the different approaches around commonalities such as meshing, MPI strategies, data infrastructures, or numerical solvers.
This minisymposium will focus on computational approaches to simulate tissue dynamics. Recent advances in algorithms, hardware, and microscopy enable more sophisticated and realistic simulations of tissue dynamics. A variety of simulation frameworks are being developed to capture different aspects of tissue dynamics. Each method has its advantages and disadvantages in terms of resolution, realism, and computational efficiency. This minisymposium will present a variety of state-of-the-art methods and their applications in biology.
The four talks will present interface-capturing methods such as the phase-field method, vertex models, as well as LBIBCell, a simulation framework that permits tissue simulations at cellular resolution by combining the Lattice-Boltzmann method for fluid and reaction dynamics with an immersed boundary condition to capture the elastic properties of tissues and to permit fluid-structure interactions.
The minisymposium will thereby offer an overview of state-of-the-art approaches to tissue simulations, and highlight recent advances and remaining challenges.
The complexities and nature of fluid flows imply that the resources needed to computationally model problems of industrial and academic relevance are virtually unbounded. CFD simulations are therefore a natural driver for exascale computing and have the potential for substantial societal impact, such as reduced energy consumption, alternative sources of energy, improved health care, and improved climate models. Extreme-scale CFD poses several cross-disciplinary challenges, e.g. algorithmic issues in scalable solver design, handling of extreme-size data with compression and in-situ analysis, and resilience and energy awareness in both hardware and algorithm design. This wide range of topics makes exascale CFD relevant to a wider HPC audience, extending outside the traditional fluid dynamics community.
This minisymposium will be organized by the EU funded Horizon 2020 project ExaFLOW together with leading CFD experts from industry and will feature presentations showcasing their work on addressing key algorithmic challenges in CFD in order to facilitate simulations at exascale, e.g. accurate and scalable solvers, strategies to ensure fault tolerance and resilience. This session aims at bringing together the CFD community as a whole, from HPC experts to domain scientists, to discuss current and future challenges towards exascale fluid dynamics simulations and to facilitate international collaboration.
Weather and climate prediction centers face enormous challenges due to the rising cost of energy associated with running complex high-resolution forecast models on more and more processors and the likelihood that Moore's law will soon reach its limit, with microprocessor feature density (and performance) no longer doubling every two years. But the biggest challenge to state-of-the-art computational services arises from their own software productivity shortfall. The application software at the heart of all prediction services throughout Europe is ill-equipped to efficiently adapt to the rapidly evolving heterogeneous hardware provided by the supercomputing industry. The solution is not to reduce the stringent requirements for Earth-system prediction but to combine scientific and computer-science expertise for defining and co-designing the necessary steps towards affordable, exascale high-performance simulations of weather and climate. The Energy-efficient Scalable Algorithms for Weather Prediction at Exascale (ESCAPE) project brings together a consortium of weather prediction centres operating at global as well as European regional scales, university institutes performing research on numerical methods and novel code optimization techniques, HPC centres with vast experience in scalable code development and diverse processor technologies, large HPC hardware vendors operating market-leading systems, as well as a European start-up SME with novel and emerging optical processor technologies, to address the challenge of extreme-scale, energy-efficient high-performance computing. Key objectives of ESCAPE are to (i) define fundamental algorithm building blocks ("weather & climate dwarfs") to foster trans-disciplinary research and innovation and to co-design, advance, benchmark, and efficiently run the next generation of NWP and climate models on energy-efficient, heterogeneous HPC architectures, (ii) diagnose and classify weather and climate dwarfs on different HPC architectures, and (iii) combine frontier research on algorithm development and extreme-scale, high-performance computing applications with novel hardware technology, to create a flexible and sustainable weather and climate prediction system. This minisymposium will present the current state of prediction-model component developments of weather and climate dwarfs within and beyond ESCAPE, and the implications for performance and the employed programming models. This session acts in close collaboration with the minisymposium on 'Programming models and abstractions for weather and climate models: Today and in the future'.
Isogeometric Analysis (IgA) is a recent but well established method for the analysis of problems governed by differential equations. Its goal is to reduce the gap between the worlds of Finite Element Analysis (FEA) and Computer Aided Design (CAD). One of the key ideas in IgA is to use a common spline representation model for the design as well as for the analysis, providing a true design-through-analysis methodology.
The IgA approach has proved superior to conventional FEA in various engineering application areas, including structural mechanics, electromagnetism, and fluid-structure interaction. The keystones of this success are the many outstanding properties of the underlying spline spaces and the associated B-spline bases. Spline representations allow for efficient (geometric) manipulation, high approximation power per degree of freedom, appealing spectral properties, and fast numerical linear algebra methods that exploit these spectral properties and/or tensor techniques.
The minisymposium will address the most recent research directions and results related to
1) analysis of spectral properties in concrete applications
2) fast numerical linear algebra methods in connection with B-splines, NURBS, extended spaces, etc.
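As a small illustration of the B-spline machinery underlying these topics (not taken from any of the talks), the following sketch evaluates cubic B-spline basis functions by the Cox-de Boor recursion; the knot vector and evaluation point are purely illustrative.

# Minimal Cox-de Boor evaluation of B-spline basis functions N_{i,p}(x).
import numpy as np

def bspline_basis(i, p, knots, x):
    """Value of the i-th B-spline basis function of degree p at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left_den = knots[i + p] - knots[i]
    right_den = knots[i + p + 1] - knots[i + 1]
    left = 0.0 if left_den == 0.0 else \
        (x - knots[i]) / left_den * bspline_basis(i, p - 1, knots, x)
    right = 0.0 if right_den == 0.0 else \
        (knots[i + p + 1] - x) / right_den * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Open knot vector on [0, 1] for cubic (p = 3) splines.
p = 3
knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)
n_basis = len(knots) - p - 1
x = 0.4
values = [bspline_basis(i, p, knots, x) for i in range(n_basis)]
print(values, "sum =", sum(values))   # partition of unity: the sum is 1 inside the domain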
Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes which can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, and task-based parallelism, as well as new algorithms that are adapted to modern hardware. This minisymposium shall bring together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.
Within materials informatics and cheminformatics, machine learning and inductive reasoning are known for their use in so-called structure-property relationships. Despite a long tradition of these methods in pharmaceutical applications, their overall usefulness for chemistry and materials science has been limited. Only over the last couple of years have a number of machine learning (ML) studies appeared with the commonality that quantum mechanical or atomistically resolved properties are analyzed or predicted using regression models defined in compositional and configurational space. The atomistic framework is crucial for the unbiased exploration of this space, since it enables, at least in principle, the free variation of chemical composition, atomic weights, structure, and electron number. Substantial CPU investments have to be made in order to obtain sufficient training data using atomistic simulation protocols. This minisymposium boasts four of the most active players in the field, who share a common background in developing computationally demanding atomistic simulation methods and have contributed new and original work based on unsupervised (Ceriotti and Varma) as well as supervised (Ghiringhelli and von Lilienfeld) learning.
In the minisymposium "Parallel Numerical Linear Algebra" we will address two major problem areas. The first part concentrates on dense eigenvalue solvers and is based on the work of the ELPA-AEO project. The underlying problems are Hermitian generalized eigenvalue problems and the parallel computation of a large part of the spectrum. The talks will present theoretical results as well as practical implementations. The second topic is the parallel solution of systems of linear equations. Here, the first talk will consider the parallelization of smoothers in multigrid methods. The second talk will present parallel preconditioners based on ILU; the resulting sparse triangular systems are themselves preconditioned and solved iteratively to obtain efficient parallel methods.
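As a serial illustration of the ILU-preconditioning idea mentioned for the second part (not the parallel methods presented in the talks), the following SciPy sketch builds an incomplete LU factorization of an illustrative 2D Laplacian and uses it as a preconditioner for GMRES; the matrix, drop tolerance, and fill factor are illustrative.

# Serial sketch of ILU-preconditioned GMRES with SciPy; the parallel
# preconditioners in the talk go well beyond this, but the algebraic structure
# (M ~ LU, apply M^{-1} to the residual in each iteration) is the same.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Illustrative 2D 5-point Laplacian on an n x n grid.
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)       # incomplete LU factors
M = spla.LinearOperator(A.shape, matvec=ilu.solve)       # preconditioner action M^{-1} v

x, info = spla.gmres(A, b, M=M, maxiter=500)
print("converged" if info == 0 else f"gmres info = {info}",
      ", residual =", np.linalg.norm(b - A @ x))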
PNL01 Beyond Moore's Law
By most accounts, we are nearing the limits of conventional photolithography processes. It will be challenging to continue to shrink feature sizes below 5 nm and still realize any performance improvement for digital electronics in silicon. At the current rate of development, the purported "End of Moore's Law" will be reached in the middle to the end of the next decade.
Shrinking the feature sizes of wires and transistors has been the driver of Moore's Law for the past five decades, but what might lie beyond the end of current lithographic roadmaps, and how will it affect computing as we know it? Moore's Law is, after all, an economic theory, and any option that makes future computing more capable with each new generation (by some measure) could continue Moore's economic theory well into the future.
The goal of this panel session is to communicate the options for extending computing beyond the end of our current silicon lithography roadmaps. The correct answers may be found in new ways to extend digital electronics efficiency or capability, or even new models of computation such as neuromorphic and quantum.
Flash Poster Session
The aim of this session is to allow poster presenters to introduce the topic of their poster and motivate the audience to visit them at the evening poster session. Authors will be strictly limited to 40 seconds each - after this time the presentation will be stopped automatically.
The computation of bulks of inner eigenpairs of large sparse matrices is known to be both algorithmically challenging and resource-intensive in terms of compute power. As compute capabilities have continuously increased over the past decades, computational models and applications requiring information about inner eigenstates of sparse matrices have become numerically accessible in many research fields. At the same time, new algorithms (e.g. FEAST or SSM) have been introduced, and long-standing methods such as filter diagonalization are still being applied, improved, and extended. However, the trend towards highly parallel (heterogeneous) compute systems is challenging the efficiency of existing solver packages as well as building-block libraries, and calls for new massively parallel solvers with high hardware efficiency across different architectures. Thus, substantial effort is being put into the implementation of new sparse (eigen)solver frameworks, which face challenges in terms of ease of use, extensibility, sustainability, and hardware efficiency. Software engineering and holistic performance engineering concepts are deployed to address these challenges. The significant momentum in the application fields, numerical methods, and software layers calls for strong interaction between the scientists involved in these activities to provide sustainable and hardware-efficient frameworks for computing inner eigenvalues of large sparse matrices. The minisymposium offers a platform to bring together leading experts in this field to discuss recent developments at all levels: from the application down to hardware-efficient implementations of basic kernel operations. Application experts will present their current and upcoming research requiring the computation of inner eigenvalues. State-of-the-art eigensolvers and new algorithmic developments will be discussed along with challenges faced by library developers in terms of software sustainability and hardware efficiency. Many of these topics are not limited to the inner sparse eigenvalue problem but are of general interest for sparse linear algebra algorithms on current and future HPC architectures.
Part 1 of the minisymposium focuses on applications and algorithms.
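As a small-scale illustration of the interior-eigenvalue task addressed in this minisymposium (not taken from any of the talks), the following sketch computes a few eigenpairs of a sparse symmetric matrix closest to a target shift via shift-invert Lanczos in SciPy; the solvers discussed in the session (FEAST, filter diagonalization, etc.) target the same problem at scales where the sparse factorization behind shift-invert is no longer affordable. The test matrix and shift are illustrative.

# Interior eigenpairs closest to a target sigma via shift-invert Lanczos.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5000
rng = np.random.default_rng(1)
# Illustrative sparse symmetric test matrix: 1D Laplacian plus a random diagonal.
A = (sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
     + sp.diags(rng.uniform(0.0, 1.0, n))).tocsc()

sigma = 2.5                                   # target inside the spectrum
vals, vecs = spla.eigsh(A, k=8, sigma=sigma, which="LM")
print("eigenvalues closest to", sigma, ":", np.sort(vals))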
In this series of two minisymposia, a special focus lies on providing a platform for exchanging ideas about scalable, memory-efficient, fast, and resilient solution techniques. These characteristics are crucial for science- and engineering-driven applications that make use of exascale computing, such as geophysics, astrophysics, and aerodynamics. Algorithms in high-performance computing require a rethinking of standard approaches to ensure, on the one hand, full use of future computing power and, on the other hand, energy efficiency. A careful implementation of all performance-relevant parts and an intelligent combination with external libraries are fundamental for exascale computation. Multigrid and domain decomposition methods play an important role in many scientific applications, yet the two communities have often developed their ideas separately while exploiting the immense compute power of supercomputers. The minisymposia address both communities and focus on exchanging current research progress related to exascale-enabled solution techniques.
Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes which can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, and task-based parallelism, as well as new algorithms that are adapted to modern hardware. This minisymposium shall bring together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.
The design of materials for energy production and storage is a subject of great scientific and technological interest, and its potential impact on society is considerable. The study of such systems is, however, rather challenging: one deals with reactions that take place at surfaces in the presence of highly disordered environments. Car-Parrinello-type simulations combined with ab initio methods are of course needed. In this minisymposium we invite specialists to discuss the peculiar challenges of this field. The issues we expect to cover include oxidation processes in solution, electron transfer, and morphology and chemistry at interfaces.
The main objective of this symposium is to bring together international scientists working in the area of particle-based modeling with applications in Life Sciences, Fluids, and Materials. Numerical methods include, but are not restricted to, Coarse-Grained Molecular Dynamics (CG-MD), Dissipative Particle Dynamics (DPD), Smoothed Dissipative Particle Dynamics (SDPD), Smoothed Particle Hydrodynamics (SPH), the Lattice-Boltzmann Method (LBM), the Moving Particle Semi-Implicit Method (MPS), Brownian Dynamics (BD), and Stokesian Dynamics (SD). The goal of the minisymposium is, on the one hand, to share state-of-the-art results in various applications of particle-based methods and, on the other, to discuss technical issues of computational modeling.
Progress in weather and climate modeling is tightly linked to the increase in computing resources available for such models. Emerging heterogeneous high-performance architectures are a unique opportunity to address these requirements in an energy- and time-efficient manner. The hardware changes of emerging computing platforms are accompanied by dramatic changes in programming paradigms, and these changes have only just started. Adapting current weather and climate codes to efficiently exploit such architectures requires an effort which is both costly and error-prone. The long software life cycles of weather and climate codes render the situation even more critical, as hardware life cycles are much shorter in comparison. Furthermore, atmospheric models are developed and used by a large variety of researchers on a myriad of computing platforms, which makes portability a crucial requirement in any kind of development. Developers of weather and climate models are struggling to achieve a better separation of concerns in order to separate the high-level specification of equations and solution algorithms from the hardware-dependent, optimized low-level implementation. It is probable that the solutions to achieve this will differ across different parts of the codes, due to different predominant algorithmic motifs and data structures. Using concrete porting efforts as examples, this session will illustrate different approaches used today and (possibly) in the future.
The ability to computationally design, optimize, or understand the properties of energy-relevant materials is fundamentally contingent on the existence of methods to accurately, efficiently, and reliably simulate them. Quantum mechanics based approaches must necessarily play a foundational role, since only these approaches can describe matter in a truly first-principles (parameter-free) and therefore robust manner. Quantum Monte Carlo (QMC) methods are ideal candidates for this, since they robustly deliver highly accurate calculations of complex materials and, with increased computer power, provide systematically improvable accuracy that is not possible with other first-principles methods. By directly solving the Schrödinger equation and treating the electrons at a consistent many-body level, these methods can be applied to general elements and materials, and are unique in satisfying robust variational principles. More accurate solutions result in lower variational energies, enabling robust confidence intervals to be assigned to predictions. The stochastic nature of QMC facilitates mapping onto high-performance computing architectures. QMC is one of the few computational materials methods capable of fully exploiting today's petaflop machines.
This symposium will present some of the latest developments on QMC methods, from an application and a development perspective.
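To make the variational principle and the stochastic sampling mentioned above concrete, here is a toy variational Monte Carlo sketch for the 1D harmonic oscillator (not one of the production QMC methods presented in the talks); the trial wavefunction, sampling parameters, and units are illustrative.

# Toy variational Monte Carlo for the 1D harmonic oscillator (hbar = m = omega = 1),
# trial wavefunction psi_a(x) = exp(-a x^2), local energy E_L(x) = a + x^2 (1/2 - 2 a^2).
# The variational principle guarantees <E_L> >= 0.5, with equality at a = 0.5.
import numpy as np

def vmc_energy(a, n_samples=100_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    energies = []
    for i in range(n_samples):
        # Metropolis move sampling |psi_a|^2 ~ exp(-2 a x^2)
        x_new = x + step * rng.uniform(-1, 1)
        if rng.random() < np.exp(-2 * a * (x_new**2 - x**2)):
            x = x_new
        if i > n_samples // 10:                    # discard burn-in
            energies.append(a + x**2 * (0.5 - 2 * a**2))
    return np.mean(energies), np.std(energies) / np.sqrt(len(energies))

for a in (0.3, 0.5, 0.8):
    e, err = vmc_energy(a)
    print(f"a = {a:.1f}:  <E> = {e:.4f} +/- {err:.4f}")   # minimum (0.5) at a = 0.5

The independent Markov chains that a production code runs for each walker are what makes the method map so naturally onto massively parallel machines.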
Human fertility is based on physiological events like adequate follicle maturation, ovulation, ovum fertilization, corpus luteum formation as well as endometrial implantation, proceeding in a chronological order. Diseases such as endometriosis or the polycystic ovary syndrome seriously disturb menstrual cycle patterns, oocyte maturation and consequently fertility. Besides endocrine diseases, several environmental and lifestyle factors, especially smoking and obesity, also have a negative impact on fertility. Modern techniques in reproductive medicine like in-vitro fertilization or intracytoplasmic sperm injection have increased the chances for successful reproduction. However, current success rates vary significantly among clinics, still reaching only about 35% even in well-functioning centers. This is mainly due to the usage of different treatment protocols and limited knowledge about individual variability in the dynamics of reproductive processes.
This minisymposium brings together researchers with different scientific backgrounds (computer science, mathematics, medicine) who work on developing model-based clinical decision support systems for reproductive endocrinologists, enabling the simulation and optimization of treatment strategies in silico. Virtual physiological human (VPH) models, together with patient-specific parameterizations (Virtual Patients), formalized treatment strategies (Virtual Doctors), and software tools (Virtual Hospital), enable in silico clinical trials, that is, clinical trials performed by means of computer simulations over a population of virtual patients. In silico clinical trials are recognized as a disruptive key innovation for medicine, as they allow medical scientists to reduce and postpone invasive, risky, costly, and time-consuming in vivo testing of new treatments to much later stages of the testing process, when a deeper knowledge of their effectiveness and side effects has been acquired via simulations.
The talks in the minisymposium will highlight different aspects of a virtual hospital in reproductive medicine.
- Distributed service oriented systems for clinical decision support
- HPC within in silico clinical trials
- Construction of large virtual patient populations
- Formalization of treatment strategies in silico
- VPH model validation, treatment verification and the design of individualized protocols
- Databases and software for the virtual hospital
- Large scale integrated physiology models
Interfaces (solid-solid, solid-liquid, solid-gas, as well as liquid-gas) give rise to a variety of interesting and crucial functions in condensed-matter physics and chemistry. The space-charge layer plays an important role in semiconductor physics and underlies several fundamentals of electronic devices. Electrochemistry, on the other hand, must always take into account the electric double layer, which is crucial for catalysis, solar cell, and battery applications. These modulations of charge-carrier distributions can extend up to the micrometer scale, although nanometer-scale modulation occurs as well in several cases. Therefore, first-principles electronic structure calculations alone do not suffice. For these issues, special techniques to deal with the interface are necessary, on top of large-scale and long-time QM-based simulations. QM/MM techniques or combinations of QM with continuum or classical theories are potential solutions. This minisymposium brings together cutting-edge researchers working on these issues to discuss and evaluate the individual methods and suggest future directions. This is important because the relationships among the methods for treating interfaces are often difficult to discern. Moreover, the materials-science flavor of this minisymposium offers perspectives for computer scientists and applied mathematicians, which will encourage future development of interdisciplinary techniques.
With a mass larger than that of the Sun compressed into an almost perfect sphere with a radius of only a dozen kilometers, neutron stars are the most compact material astrophysical objects we know. In their cores, particles are squeezed together more tightly than in atomic nuclei and no terrestrial experiment can reproduce the extreme physical conditions of density, temperature, and gravity. With such properties, it is clear that neutron stars in binary systems are unique laboratories to explore fundamental physics – such as the state of matter at nuclear densities – and fundamental astrophysics – such as the one behind the “central engine” in short gamma-ray bursts. Yet, such an exploration does not come easy. The nonlinear dynamics of binary neutron stars, which requires the combined solution of the Einstein equations together with those of relativistic hydrodynamics and magnetohydrodynamics, and the complex microphysics that accompanies the inspiral and merger, make sophisticated numerical simulations in three dimensions the only route for an accurate modeling.
This minisymposium will focus on the gravitational-wave emission during the inspiral and the connection between merging binaries and the corresponding electromagnetic counterpart. These two problems require urgent attention as they are both likely to play an important role in the imminent detection of gravitational waves from binary neutron stars by interferometric detectors such as LIGO and Virgo.
The computation of bulks of inner eigenpairs of large sparse matrices is known to be both algorithmically challenging and resource-intensive in terms of compute power. As compute capabilities have continuously increased over the past decades, computational models and applications requiring information about inner eigenstates of sparse matrices have become numerically accessible in many research fields. At the same time, new algorithms (e.g. FEAST or SSM) have been introduced, and long-standing methods such as filter diagonalization are still being applied, improved, and extended. However, the trend towards highly parallel (heterogeneous) compute systems is challenging the efficiency of existing solver packages as well as building-block libraries, and calls for new massively parallel solvers with high hardware efficiency across different architectures. Thus, substantial effort is being put into the implementation of new sparse (eigen)solver frameworks, which face challenges in terms of ease of use, extensibility, sustainability, and hardware efficiency. Software engineering and holistic performance engineering concepts are deployed to address these challenges. The significant momentum in the application fields, numerical methods, and software layers calls for strong interaction between the scientists involved in these activities to provide sustainable and hardware-efficient frameworks for computing inner eigenvalues of large sparse matrices. The minisymposium offers a platform to bring together leading experts in this field to discuss recent developments at all levels: from the application down to hardware-efficient implementations of basic kernel operations. Application experts will present their current and upcoming research requiring the computation of inner eigenvalues. State-of-the-art eigensolvers and new algorithmic developments will be discussed along with challenges faced by library developers in terms of software sustainability and hardware efficiency. Many of these topics are not limited to the inner sparse eigenvalue problem but are of general interest for sparse linear algebra algorithms on current and future HPC architectures.
Part 2 of the minisymposium focuses on algorithms as well as software and performance aspects.
Modeling and simulation of problems in cardiovascular mechanics can contribute significantly to the development of the field of precision medicine for cardiovascular and systemic phenomena. The relevant models of the related multiphysics problems can only be numerically simulated through the efficient use of modern techniques from computational mathematics, mechanics, and high-performance computing. Problems addressed in this minisymposium include reentry dynamics in cardiac electromechanical models, early atherosclerosis progression, and fluid-structure interaction using realistic arterial wall material models. This minisymposium aims at gathering researchers and experts in computational modeling and simulation of the heart and the systemic circulation.
Kinetic simulations play an essential role in understanding the dynamics of plasmas in the fields of nuclear fusion, laser-plasma interaction, and astrophysics. The complexity of kinetic computations, in particular their high dimensionality and multi-scale nature, leads to exciting challenges in physics, applied mathematics, and computer science. For example, modeling the plasma dynamics close to the edge of magnetic fusion devices requires codes which can flexibly handle complex geometries and implement enhanced gyrokinetic models or fully kinetic descriptions. Modern numerical tools such as multi-scale methods, structure-preserving schemes, and isogeometric meshes therefore need to be adapted to plasma physics models in order to enhance state-of-the-art kinetic codes. At the same time, new programming models are necessary to prepare codes for use on emerging heterogeneous HPC systems. This includes vectorization, cache-efficient memory organization, and task-based parallelism, as well as new algorithms that are adapted to modern hardware. This minisymposium shall bring together scientists from physics, applied mathematics, and computer science to discuss current trends in the development of (gyro)kinetic codes.
This minisymposium will bring together researchers who use molecular simulation in their respective fields, in order to discuss recent advances and to exchange experiences and ideas. The focus of this minisymposium is the analysis of very large biological and chemical data sets arising from the simulation of complex molecular systems, by means of developing efficient algorithms and implementing them on high-performance supercomputers.
This approach is necessary for designing smart drug-like molecules for precision medicine. The tools include, but are not limited to, algebraic stochastic dimension-reduction methods such as nonnegative matrix decomposition for very large data sets obtained from atomic spectroscopy, Markov State Models (MSMs), multiscale methods in time and space for studying molecular conformations, PDEs for the analysis of multivalent binding kinetics in biochemical systems, and spectral clustering.
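As a minimal illustration of one of the tools listed above (not taken from the talks), the following sketch estimates a Markov State Model transition matrix from an already-discretized trajectory and reads off implied timescales; the synthetic trajectory, number of states, and lag time are illustrative.

# Minimal MSM estimation: count transitions at a chosen lag time, row-normalize
# to get the transition matrix, and convert its eigenvalues to implied timescales.
# In practice the discrete trajectory comes from clustering MD conformations.
import numpy as np

def estimate_msm(dtraj, n_states, lag):
    counts = np.zeros((n_states, n_states))
    for t in range(len(dtraj) - lag):
        counts[dtraj[t], dtraj[t + lag]] += 1
    counts = 0.5 * (counts + counts.T)        # simple symmetrization (illustrative only)
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Synthetic 3-state trajectory generated from a known transition matrix.
T_true = np.array([[0.95, 0.04, 0.01],
                   [0.03, 0.90, 0.07],
                   [0.01, 0.09, 0.90]])
dtraj = [0]
for _ in range(50_000):
    dtraj.append(rng.choice(3, p=T_true[dtraj[-1]]))
dtraj = np.array(dtraj)

lag = 1
T_est = estimate_msm(dtraj, 3, lag)
eigvals = np.sort(np.linalg.eigvals(T_est).real)[::-1]
timescales = -lag / np.log(eigvals[1:])       # implied timescales, in units of the lag
print("estimated T:\n", np.round(T_est, 3))
print("implied timescales:", np.round(timescales, 1))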
The main objective of this symposium is to bring together international scientists working in the area of particle-based modeling with applications in Life Sciences, Fluids, and Materials. Numerical methods include, but are not restricted to, Coarse-Grained Molecular Dynamics (CG-MD), Dissipative Particle Dynamics (DPD), Smoothed Dissipative Particle Dynamics (SDPD), Smoothed Particle Hydrodynamics (SPH), the Lattice-Boltzmann Method (LBM), the Moving Particle Semi-Implicit Method (MPS), Brownian Dynamics (BD), and Stokesian Dynamics (SD). The goal of the minisymposium is, on the one hand, to share state-of-the-art results in various applications of particle-based methods and, on the other, to discuss technical issues of computational modeling.
This minisymposium lies at the interface of computer science and applied mathematics, presenting recent advances in methods, ideas and algorithms addressing resilience for extreme-scale computing.
Extreme scale systems are expected to exhibit more frequent system faults due to both hardware and software, making resilience a key problem to face. On the hardware side, challenges will arise due to the expected increase in the number of components, variable operational modes (e.g. lower voltage to address energy requirements), and increasing complexity (e.g. memory hierarchies, heterogeneous cores, more, smaller transistors). The software stack will need to keep up with the increasing hardware complexity, hence becoming itself more error-prone.
In general, we can distinguish between three main categories of faults, namely hard (where a hardware component fails and needs to be fixed/replaced), soft/transient (a fault occurs, but is corrected by the hardware or low-level system software), and silent/undetectable (an error occurs but cannot be detected and fixed). The first two categories have a well-defined impact on the run and the system itself. The third class is more subtle because its effect is simply to alter stored, transmitted, or processed information, and there is no opportunity for an application to directly recover from a fault. This can lead to noticeable impacts such as crashes and hangs, as well as corrupted results.
Current systems do not have an integrated approach to fault tolerance; instead, the various subsystems have their own mechanisms for error detection and recovery (e.g. ECC memory). There is also no good error isolation: the failure of any component in a parallel job generally causes the entire job to fail. In fact, the current standard of the Message Passing Interface (MPI) does not support failing ranks. Common approaches to fault tolerance include hardware-level redundancy, algorithmic error correction, and checkpoint/restart, the latter being the most widely used today. However, the tight power budget targeted for future systems and the expected shortening of the mean time between failures (MTBF) may render it infeasible for extreme-scale computing.
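To make the checkpoint/restart trade-off concrete, the following sketch evaluates the classical first-order Young/Daly estimate of the optimal checkpoint interval for a few hypothetical values of the checkpoint cost and MTBF; all numbers are made up for illustration.

# First-order Young/Daly estimate of the optimal checkpoint interval,
#   tau_opt ~ sqrt(2 * C * MTBF),
# together with a rough estimate of the wall-time overhead (checkpoint writes
# plus expected rework after a failure). All parameters are hypothetical.
import math

checkpoint_cost = 600.0                      # seconds to write one checkpoint (hypothetical)
for mtbf_hours in (24.0, 6.0, 1.0):          # system-level mean time between failures
    mtbf = mtbf_hours * 3600.0
    tau = math.sqrt(2.0 * checkpoint_cost * mtbf)
    overhead = checkpoint_cost / tau + (tau / 2.0 + checkpoint_cost) / mtbf
    print(f"MTBF = {mtbf_hours:4.1f} h -> checkpoint every {tau/60:6.1f} min, "
          f"overhead ~ {overhead:5.1%}")

The overhead grows quickly as the MTBF shrinks while the checkpoint cost stays fixed, which is why plain checkpoint/restart is expected to become prohibitively expensive at extreme scale.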
It is increasingly recognized that hardware-only resilience will likely become infeasible in the long term. This creates the need for an integrated approach, in which resilience is tackled across all layers to mitigate the impact of faults in a holistic fashion, keeping in mind its interplay with the energy budget. Hence, in parallel with the continuous effort aimed at improving resilience in hardware and system software, new approaches and ideas need to be incorporated at the highest level, i.e. in algorithms and applications, to account for potential faults such as silent data corruptions (SDCs). In other words, algorithms themselves need to be made more robust and resilient.
This minisymposium explores HPC resilience in the context of algorithms, applications, hardware, systems, and runtimes. Specifically, the talks have been selected to cover topics ranging from solvers, programming models, and energy-aware computing to approximate computing, memory vulnerability, and post-Moore's-law computing.
PNL02 Sustainable Software Development and Publication Practices in the Computational Sciences
The goal of the PASC papers program is to advance the quality of formal scientific communication between the related disciplines of computational science and engineering. The program was built on the observation that the computer science community traditionally publishes in the proceedings of major international conferences, while the domain science community generally publishes in discipline-specific journals, and cross-readership is very limited. The aim of our initiative is to build and sustain a platform that enables engagement between the computer science, applied mathematics, and domain science communities through a combination of conference participation, conference papers, and post-conference journal publications. The PASC papers initiative allows authors to benefit from the interdisciplinarity and rapid dissemination of results afforded by the conference venue, as well as from the impact associated with subsequent publication in a high-quality scientific journal. To help facilitate such journal publication, PASC has recently formed collaborative partnerships with a number of scientific journals, including Computer Physics Communications (CPC), the Journal of Advances in Modeling Earth Systems (JAMES), and ACM Transactions on Mathematical Software (ACM TOMS). In this panel discussion, representatives from these journals are invited to share their thoughts on publication practices in the computational sciences, including the publication of software codes. We will discuss best practices for sustainable software development and address questions such as: how can we ensure that code and infrastructure will still be there in ten-plus years? How can we validate published results and guarantee reproducibility? Finally, we will describe our vision for the PASC papers initiative going forward.
Panelists:
- Thomas Schulthess (CSCS / ETH Zurich, Switzerland)
- Walter Dehnen (University of Leicester): Editor (Astronomy and Astrophysics) for Computer Physics Communications (CPC)
- Robert Pincus (University of Colorado): Editor in Chief of the Journal of Advances in Modeling Earth Systems (JAMES)
- Michael A. Heroux (Sandia National Laboratories): Associate Editor (Replicated Computational Results) for ACM Transactions on Mathematical Software (TOMS)
Large-scale computational simulations have become a key tool in fluid-structure interaction research. The investigation of fundamental flow principles and the interaction of blood cells with surrounding plasma have necessitated the use of high-performance computing and high-fidelity computational methods. The need for massively parallel simulation of particle-laden fluids has driven the development of novel multiscale coupling techniques to enable high-resolution FSI modeling in complex topologies. This symposium will bring together developers of state-of-the-art multiscale and multiphysics models of hemodynamics with a particular emphasis on rheology and transport phenomena. The set of talks will showcase a range of techniques used to simulate different cell-types in an extensible and scalable manner while highlighting recent advances in computational hemodynamics. This symposium will provide a platform to identify cross-cutting challenges and opportunities for future research.
Precision medicine is an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle for each person. Although the idea has been a part of healthcare for many years (e.g. blood transfusions), research in precision medicine has spurred a lot of interest recently due to the accessibility to large volumes of complex genomics and other biomedical datasets as well as digitized medical records and the development of novel methods and tools in data science. Sound, interoperable high-performance computing and “Big Data” analytics and management infrastructures are key to the success of research programs such as the “Swiss Personalized Health Network” (SPHN) initiative starting in 2017. These infrastructures will have to be built in collaboration and coordination between hospitals and universities to allow researchers to perform biomedical research on real patient data beyond institutional and geographical boundaries. A particular challenge of this novel infrastructure is the combination of high-performance and Big Data storage and computing resources and data management services with high data security and compliance requirements, so it can fulfill the regulations of the respective federal laws (in Switzerland, particularly the Human Research Act (Humanforschungsgesetz, HFG)) and international best practices in the field.
This minisymposium aims at bringing together experts from biomedical research, hospital and university computing and data service providers, as well as from the side of Ethical, Legal and Social Implications (ELSI), and will put the subject into the larger perspective of the upcoming Swiss national research initiative on personalized health care.
Recent years have seen a dramatic explosion in the amount and precision of available raw data. Large amounts of measured and simulated information from all kinds of processes have been accumulated in a wide range of areas, from weather and climate research to astrophysics and neuroscience. If knowledge about such systems is present only in the form of observations or measurement data, the challenging problem of understanding the system becomes one of pattern recognition and model reduction in multiple dimensions. Optimization methods have established themselves as a central pillar for practical implementations of these data analysis problems, allowing a unified handling of a wide variety of data analysis algorithms. These include clustering methods (standard K-means, fuzzy C-means, or Fuzzy Clustering based on Regression Models (FCRM)) as well as more advanced methods based on concepts from Artificial Neural Networks (ANNs) and nonparametric/nonstationary data analysis.
A central challenge in solving such data analysis problems lies in not imposing too many – potentially inappropriate – a priori assumptions on the available data. Here, too, recent advances in high-performance optimisation methods can assist the development of data analysis methods, through appropriate regularisation concepts and related tools. Recent progress in GPU-based high-performance computing implementations of optimisation methods now enables us to apply these techniques to ever larger problems, for instance from causality inference, image denoising, data compression, and the identification of market phases in finance, among many others.
In this minisymposium we will discuss the current state of the art in optimisation-driven multidimensional data analysis, examine the associated parallel programming issues – particularly in view of the emergence of disruptive processor technologies such as clusters of Graphics Processing Units (GPUs) and other accelerators (e.g., Intel Xeon Phi) – and hear about applications to data analysis problems from various areas. Recent work on the development of community libraries for HPC optimisation software will also be presented.
Understanding the basic building blocks of matter is amongst the most formidable ventures humanity has undertaken. The Large Hadron Collider (LHC) and its associated experiments have allowed us to observe the particles and processes that lie at the foundation of our current understanding of the physical world. Furthermore, the LHC and other high-energy physics (HEP) facilities continue to produce data at an ever increasing rate, allowing us to peer beyond the Standard Model.
In order to filter, process, and analyze the data captured at particle detectors, the HEP community has had an insatiable appetite for computing power over the last decades. Today the LHC experiments record around 150 PB of data per year, and the rate of interactions to be studied will increase by a factor of 100 over the next 10 to 15 years.
Computing at the LHC experiments happens mostly in two domains. In online computing, data captured at the detector must be processed and filtered using near-realtime, high-throughput computing software frameworks. The reconstruction of a particle collision event employs a large number of complex algorithms before the results are stored for further analysis. Offline computing deals with the physics analysis of the large data sets captured by the detector and retained by the online computing software.
In this session we take a closer look at detector simulation and data analysis in HEP experiments. Much work has been devoted in recent years to further developing the existing software frameworks to better take advantage of modern hardware architectures, including shared-memory parallelism, vectorization, and support for coprocessors and accelerators. Furthermore, recent advances in the field of machine learning are finding a number of applications in high-energy physics. The presentations discuss these simulation and data-analysis frameworks in the context of high-performance computing and modern hardware architectures.
The elucidation of biological processes relies on experimental protocols that dissect intricate reactions into several smaller pieces, thus obtaining simplified models of the entire process. Computer simulations have been used since the early 70s to study the physical and chemical properties of biomolecules, and in recent years they have played an increasingly relevant role in elucidating, complementing, and even predicting experimental observables. The chemical reactivity of biomolecules can be studied in silico at different levels of accuracy and system size using electron-based, atom-based, and multiscale models. The choice among the different approaches depends on the properties of the system that scientists aim to investigate. Electron transfer reactions, ligand/protein binding, and protein-protein interactions are only a few examples of the biological processes that can be investigated through computer simulations.
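As one concrete example of such multiscale models, a widely used additive QM/MM partitioning (quoted here in its generic textbook form, purely for orientation) writes the total energy as

\[ E_{\mathrm{total}} = E_{\mathrm{QM}}(\mathrm{I}) + E_{\mathrm{MM}}(\mathrm{II}) + E_{\mathrm{QM/MM}}(\mathrm{I,II}), \]

where the chemically active region I is treated with an electron-based method, the environment II with a classical force field, and the coupling term collects the electrostatic and van der Waals interactions between the two regions.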
The main limitations of these computational approaches are:
- the timescale of the biological processes;
- the size of the system to simulate;
- the accuracy of the system’s description.
The continuous effort of the scientific community to overcome these limitations has led to important advances, namely the development of novel methods and the improved performance of simulation codes on modern computer architectures such as high-performance computing (HPC) clusters. The scientific literature shows many successful examples in which simulations unravel complex biochemical problems, and some of them are illustrated in the present minisymposium.
The minisymposium also represents an opportunity, particularly for young researchers, to discuss state-of-the-art simulation techniques with some of the world-leading scientists in the field and to sketch a path towards the future of biomolecular simulations.
Computational nanoelectronics is an emerging scientific area that leads to several challenging mathematical problems, such as the non-equilibrium Green's function formalism, density functional theory, covariance matrix analysis in uncertainty quantification, and dynamical mean-field theory, to mention only some of the topics. These problems require high-performance computing methods because of their mathematical and algorithmic complexity. Typical computational kernels include the solution of large-scale eigenvalue problems, the selective inversion of parts of a large matrix, the solution of many linear systems, and more. This minisymposium will engage in research and application software development for extreme-scale numerical linear algebra, targeted at classes of problems that require modern numerical methods and high computer performance.
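To illustrate the linear-algebra core of such problems, the toy sketch below (a wide-band-limit tight-binding chain, assumed here purely for illustration and unrelated to any production nanoelectronics code) builds the retarded Green's function and the Landauer transmission; in realistic device simulations the same inverse must be obtained selectively for matrices with millions of rows:

import numpy as np

def transmission(E, n=6, eps=0.0, t=-1.0, gamma=0.5, eta=1e-6):
    # Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a 1D tight-binding chain.
    # device Hamiltonian: on-site energy eps, nearest-neighbour hopping t
    H = eps * np.eye(n) + t * (np.eye(n, k=1) + np.eye(n, k=-1))
    # wide-band-limit self-energies of the left/right contacts (purely imaginary, on end sites)
    Sigma_L = np.zeros((n, n), complex); Sigma_L[0, 0] = -0.5j * gamma
    Sigma_R = np.zeros((n, n), complex); Sigma_R[-1, -1] = -0.5j * gamma
    # retarded Green's function: G = [(E + i*eta) I - H - Sigma_L - Sigma_R]^{-1}
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

# example: transmission at a few energies inside the band
print([round(transmission(E), 3) for E in (-1.0, 0.0, 1.0)])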
Active Matter is an emerging field of physics that studies a variety of systems composed of entities that actively consume energy and convert it into motion. Biological examples of active matter span many orders of magnitude in scale, from swarms of motile bacteria to schools of fish, and can collectively exhibit phenomena such as collective motion and dynamic self-organization. Understanding the basic principles governing inherently non-equilibrium active matter is a difficult challenge. One approach to achieving this goal is to create synthetic systems that reproduce behaviors typical of living matter and that can be studied to understand the complex phenomena that emerge. Computational modeling plays a critical role in this process, enabling both a better understanding of biological systems and the design of synthetic systems that can then be experimentally tested. This minisymposium will discuss recent computational advances and challenges in the field of Active Matter.
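A minimal and widely used caricature of such collective motion is the Vicsek alignment model; the sketch below (a self-contained toy implementation, not code from any of the presented work) updates each particle's heading towards the local average direction plus noise:

import numpy as np

def vicsek_step(pos, theta, L=10.0, r=1.0, v0=0.3, eta=0.2, rng=None):
    # One update of the Vicsek model: each particle adopts the mean heading of its
    # neighbours within radius r, plus angular noise, then moves at constant speed v0.
    rng = rng or np.random.default_rng()
    # pairwise separations with periodic (minimum-image) boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = (d ** 2).sum(-1) < r ** 2
    # average heading of neighbours, computed via unit vectors to handle angle wrapping
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, len(theta))
    pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L
    return pos, theta

# example: 200 particles; at low noise the headings typically align into collective motion
rng = np.random.default_rng(1)
pos = rng.uniform(0, 10.0, (200, 2)); theta = rng.uniform(-np.pi, np.pi, 200)
for _ in range(300):
    pos, theta = vicsek_step(pos, theta, rng=rng)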
Astrophysical flows are a great challenge for today's simulation community. Astrophysical disks in particular exhibit large Mach numbers, turbulence, ionization, and shock waves, and are subject to many nonlinear instabilities. Compared to those used for geophysical and laboratory flows, the numerical methods employed in this field have seen many improvements during the last decades. Finite volume methods, mesh refinement, moving meshes, and other recent developments such as Lagrangian methods that improve on conventional particle-based methods (SPH) are used efficiently; today's large supercomputers also allow one to run long-term 3D calculations that were infeasible ten years ago. The results of these simulations give a new picture of the dynamics of these disks and contribute significantly to astrophysics. Planet formation models are improved, disk observations can be predicted and understood, new instabilities are discovered, and much more will be done in the coming years.
We present in this minisymposium some recent developments in the field, given by code developers and expert users. We will focus in particular on: (i) different state-of-the-art solvers for the fluid equations of a single fluid component, and (ii) methods that solve for two or more coupled fluids in order to model dust species and gas simultaneously. The efforts and interest in the field now make it possible to perform astrophysical disk simulations on almost 100,000 CPU cores as well as on GPU clusters; this is becoming a new area of HPC in astrophysics.
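As a schematic of the dust-gas coupling that such multifluid methods must handle (a standard drag-coupling form, given here only for orientation and not specific to any of the presented codes), the dust and gas velocities are linked through a stopping time $t_s$:

\[ \frac{d\mathbf{v}_d}{dt} = -\frac{\mathbf{v}_d - \mathbf{v}_g}{t_s}, \qquad \frac{d\mathbf{v}_g}{dt} = \epsilon\,\frac{\mathbf{v}_d - \mathbf{v}_g}{t_s}, \]

where $\epsilon$ is the dust-to-gas mass ratio. For small grains, $t_s$ becomes much shorter than the hydrodynamic time step, so the coupling is stiff and typically requires implicit or semi-analytic integration.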
Current and future computing systems are becoming increasingly unreliable and unpredictable: it is expected that at scale, errors will become the rule rather than the exception, and already today severe performance fluctuations can be observed. In addition, the heterogeneity of the hardware mandates that robustness, asynchrony, and communication avoidance be built directly into the methods in order to achieve close-to-peak performance. Consequently, reliability and robustness must be built directly into scientific computing applications and numerical algorithms. In this minisymposium, we discuss the state of the art in fault-tolerant, communication-avoiding, and asynchronous methods, focusing on, but not necessarily limiting the scope to, iterative solvers. The invited talks emphasize proven or provable novel algorithms and mathematical techniques beyond redundancy. Recent developments towards interactions with middleware and operating systems, as well as analytical or parameterised models, will also be highlighted. This workshop, a synthesis of the state of the field, will be accessible to non-experts and experts alike.
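As a minimal illustration of what "asynchronous" means for an iterative solver (a toy sequential emulation under a bounded-staleness assumption, not code from any of the presented solvers), the block-Jacobi sweep below lets each block update read a possibly outdated copy of the global iterate instead of synchronizing after every sweep:

import numpy as np

def async_block_jacobi(A, b, n_blocks=4, sweeps=200, max_delay=3, seed=0):
    # Toy model of an asynchronous block-Jacobi iteration: each block update
    # may read a stale copy of the global iterate, mimicking unsynchronized processes.
    rng = np.random.default_rng(seed)
    n = len(b)
    blocks = np.array_split(np.arange(n), n_blocks)
    history = [np.zeros(n)]                      # past iterates, used to emulate staleness
    x = history[-1].copy()
    D_inv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        for blk in blocks:
            # each block sees a snapshot that may be up to max_delay updates old
            stale = history[-1 - rng.integers(0, min(max_delay, len(history)))]
            r = b[blk] - A[blk, :] @ stale        # residual computed from stale data
            x[blk] = stale[blk] + D_inv[blk] * r  # Jacobi update restricted to the block
            history.append(x.copy())
    return x

# example usage on a small, diagonally dominant system
n = 20
A = np.eye(n) * 4 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
b = np.ones(n)
x = async_block_jacobi(A, b)
print(np.linalg.norm(A @ x - b))

For diagonally dominant systems, classical chaotic-relaxation results guarantee convergence under bounded delays, which is what makes such methods attractive when global synchronization becomes the bottleneck or individual processes become unreliable.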
First FoMICS Student Prize in Computational Science and Engineering
The Swiss Graduate Program FoMICS "Foundations in Mathematics and Informatics for Computer Simulations in Science and Engineering", led by the Institute of Computational Science (ICS) at the Università della Svizzera italiana in Lugano, is pleased to announce the first FoMICS prize for PhD students. In this session, selected students will have the opportunity to present their doctoral research in a short talk (10-15 min). The prize will be awarded based on the quality of the results and the ability to communicate them to the audience. The award will be presented at the closing session of PASC17.
The predictive power of fluid-structure interaction methods, together with the growing computational power of massively parallel supercomputing architectures, has led to key scientific and technological advances in patient-specific hemodynamic modeling. The increase in computational power, alongside improved numerical techniques, opens up the possibility to simulate and predict behavior from the cellular to the systemic level over larger temporal domains. The aim of the present minisymposium is to gather experts in the computational hemodynamics community to discuss the challenges in algorithmic development as well as in porting, scaling, and optimizing large-scale blood flow models for leadership-class systems. The presentations will focus both on the discussion of findings and on the lessons learned regarding effective use of next-generation architectures for the advancement of such biomedical applications.
Numerical weather prediction and climate modeling are highly dependent on the available computing power in terms of the achievable spatial resolution, the number of members run in ensemble simulations, and the completeness of the physical processes that can be represented. Both domains also depend strongly on the ability to produce, store, and analyze large amounts of simulated data, often under time constraints from operational schedules or internationally coordinated experiments. The ever-increasing complexity of both numerical models and high-performance computing (HPC) systems has led to the situation that today one major limiting factor is no longer the theoretical peak performance of available HPC systems, but the relatively low sustained efficiency that can be obtained with complex numerical models of the Earth system.
The differences in model complexity, as well as in the temporal and spatial scales that were historically characteristic of climate and weather modeling, are vanishing, since both applications ultimately require complex Earth system modeling capabilities that resolve the same physical process detail across the atmosphere, ocean, cryosphere, and biosphere. With increasing compute power and data handling needs, both communities must exploit synergies to tackle common scientific and technical challenges.
This minisymposium will focus on joint climate and weather community engagement in cutting edge high-resolution modeling for research and service provision.
Fluorescence-mediated tomography (FMT) is an optical imaging technique for accessing the three-dimensional distribution of a fluorescent agent at depths of a few centimeters. The main application is preclinical drug development, but the technology is also potentially applicable to human hand and breast imaging. In FMT, a light source shines onto the object and the light propagates into it, where it is scattered and absorbed. It excites a fluorescent material, which emits light at a different wavelength. The remaining light and the emitted light can be measured on the other side of the object. From the measurements, the fluorescence distribution should be reconstructed as accurately as possible. This is not only a relevant medical problem but also a current topic of mathematical research. One important aspect is an accurate optical model containing information about the shape of the object and its heterogeneous scattering and absorption maps. Mathematically, the process can be described by the Boltzmann transport equation, which is itself expensive to solve. Furthermore, due to the strong scattering at near-infrared wavelengths, the inverse problem of fluorescence reconstruction is mathematically and computationally challenging, requiring HPC.
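For orientation, a commonly used simplification of the full transport model is the diffusion approximation, combined with a regularized least-squares reconstruction; the notation below is generic and not tied to any specific talk in this session:

\[ -\nabla\cdot\big(D(\mathbf{r})\,\nabla\Phi(\mathbf{r})\big) + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) = q(\mathbf{r}), \qquad \hat{x} = \arg\min_{x \ge 0}\ \|F(x) - y\|_2^2 + \alpha\,\mathcal{R}(x), \]

where $D$ and $\mu_a$ are the heterogeneous diffusion and absorption coefficients, $\Phi$ the photon density, $q$ the source, $F$ the forward operator mapping the unknown fluorophore distribution $x$ to the boundary measurements $y$, and $\mathcal{R}$ a regularization term with weight $\alpha$ that counteracts the ill-posedness caused by the strong scattering.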
The purpose of this minisymposium is to report on continuing progress in mathematical and computational methods that aim to improve the quality of FMT image reconstruction. It brings together researchers from applied mathematics and computational science, as well as from industry, to discuss their work and exchange ideas.
Although it seems difficult for any hardware to reach a true exaflop at the moment, the development of first-principles calculation codes in this direction remains very important to the materials science, chemistry, and condensed matter physics communities. The Swiss National Supercomputing Centre, which operates the world No. 8 system Piz Daint (9.7 PFlops), announced that it will adopt NVIDIA Pascal GPUs in conjunction with Intel Haswell CPUs. In Japan, on the other hand, the Joint Center for Advanced High-Performance Computing and FUJITSU launched the world No. 6 system Oakforest-PACS (13.5 PFlops), based on Intel Xeon Phi processors, and RIKEN, which hosts the world No. 7 K computer (10.5 PFlops) built by FUJITSU, announced that it will adopt ARM processors for the Post-K exascale supercomputer. There are thus two different directions: GPU+CPU and many-core. Given this trend, first-principles (mainly DFT) codes need to be adapted to either or both, at the level of the implementation or of the methodology. This minisymposium is designed to bring together developers in Switzerland and Japan who face these issues and to discuss future directions with an audience from domain science and computer science.
In the early days of HPC, a debate raged between the Lagrangian and Eulerian approaches to hydrodynamic simulations. In 1D, there was no contest: Lagrangian hydrodynamics was the clear winner. This approach is still seen today in stellar evolution codes, where the immense time spans that must be covered force us to traverse at least the great bulk of that time span in 1D. In the 1960s and 70s, the debate moved to 2D. Lagrangian grids tangle, which gave rise to various mitigating techniques, such as slide lines and “free Lagrange” or “continuous rezoning” approaches. Eulerian grids cannot easily capture critical flow features, such as multimaterial surfaces, which gave rise to mitigating techniques such as volume-of-fluid and front-tracking approaches. In the 1980s, the seeds of modern SPH (smoothed particle hydrodynamics) and AMR (adaptive mesh refinement) were planted in the Lagrangian and Eulerian camps, respectively. The simulation community then moved to 3D, where these new variants now play very important roles.
In this minisymposium, we will explore the present state of the art on both the Lagrangian and Eulerian sides. For example, can Riemann problems play a similar role in improving shock capturing in Lagrangian codes to the one they play in Eulerian codes? Or, viewing the debate from another angle, how dynamic can an AMR grid actually be at an affordable overhead cost? A potential outcome of the discussion in this minisymposium could be a small selection of test problems that could, in principle, be addressed by both approaches. Problems along the lines of “I bet you can’t do this” would be fun, but only if they actually could be set up in the opposite approach without huge investments of time and effort. Problems that have been addressed in both ways in recent years, and that would be of interest to hear about in their present state of the art, include: the common-envelope star problem, the formation of galaxies or other large structures in cosmology, star formation and fragmentation, planet formation, and a variety of problems involving unstable multifluid interfaces and the development of turbulence.
Contributions showing the state of the art in these domains will be presented from the Lagrangian and Eulerian perspectives. Contributors will demonstrate the potential for their favorite problems to become selected tests that could be used by the community in advancing this debate, and with it the level of technical proficiency on each side. This minisymposium is viewed as a mechanism to begin this debate afresh from a modern perspective. The selected presenters will engage the attendees, particularly with the goal of establishing a forum and a set of agreed-upon test problems.
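As one concrete candidate for such a shared test suite (mentioned here purely as an illustration of the format, not as an agreed choice), the classical Sod shock tube is fully specified by a handful of numbers and has an exact solution against which both Lagrangian and Eulerian codes can be scored:

# Sod (1978) shock tube: standard initial conditions for an ideal gas with gamma = 1.4.
# Left and right states are separated by a diaphragm at x = 0.5 on the unit interval.
sod_test = {
    "gamma": 1.4,
    "x_interface": 0.5,
    "t_end": 0.2,
    "left":  {"density": 1.0,   "velocity": 0.0, "pressure": 1.0},
    "right": {"density": 0.125, "velocity": 0.0, "pressure": 0.1},
}

def sound_speed(state, gamma=1.4):
    # Ideal-gas sound speed c = sqrt(gamma * p / rho), useful for setting the CFL time step.
    return (gamma * state["pressure"] / state["density"]) ** 0.5

print(sound_speed(sod_test["left"]), sound_speed(sod_test["right"]))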