Beyond Moore's Law

Chair: John Shalf (Lawrence Berkeley National Laboratory, USA)


By most accounts, we are nearing the limits of conventional photolithography. It will be challenging to shrink feature sizes below 5 nm and still realize performance improvements in silicon digital electronics. At the current rate of development, the purported “End of Moore’s Law” will be reached by the middle to end of the next decade.

Shrinking the feature sizes of wires and transistors has driven Moore’s Law for the past five decades, but what might lie beyond the end of current lithographic roadmaps, and how will it affect computing as we know it? Moore’s Law is, after all, an economic theory, and any option that makes each new generation of computing more capable (by some measure) could continue it well into the future.

The goal of this panel session is to communicate the options for extending computing beyond the end of our current silicon lithography roadmaps. The answers may lie in new ways to extend the efficiency or capability of digital electronics, or in entirely new models of computation such as neuromorphic and quantum computing.

Panel presentations

To move beyond current technologies, the design of both silicon-based and functional materials requires control at the level of individual atomic configurations and single-atom positions. Such control is equally important for understanding and harnessing energy, where disruptive improvements in efficiency are required. Achieving this ultimate limit of material and device performance – from quantum computing to nonvolatile memories, and from thermoelectrics to superconductors – requires the design and control of matter with atomic, molecular, and mesoscale fidelity through the development of precision synthesis tools. Here I will discuss pathways to direct matter in a scalable fashion, fabricating three-dimensional structures in a variety of materials with atom-by-atom and defect-level control of their shape and composition.

The development of quantum mechanics gave us not only a new understanding of the physical world but also a new paradigm for computing. A quantum computer harnesses quantum effects to achieve a speedup over classical computers. The most prominent example is arguably Shor’s algorithm, which factors integers exponentially faster on a quantum computer than the best known classical algorithm. More interesting for computational science, quantum computers can exponentially speed up calculations in quantum chemistry and materials science, which occupy a large fraction of today’s supercomputers; even a medium-sized quantum computer could reach beyond classical exascale machines for such applications. Further potential application areas include the efficient solution of linear systems and the acceleration of stochastic simulations. This panel will discuss the future application fields of quantum computers and the remaining challenges in building a large-scale quantum computer.
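Shor’s reduction from factoring to order finding can be sketched purely classically; the sketch below (an illustration under stated assumptions, not a quantum implementation, with hypothetical function names) brute-forces the order, which is exactly the step a quantum computer performs exponentially faster:

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a^r = 1 (mod n). Classical brute force here;
    # this is the subroutine Shor's algorithm accelerates exponentially.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a=2):
    # Shor's classical post-processing: factor n via the order of a mod n.
    g = gcd(a, n)
    if g != 1:
        return g                   # lucky case: a already shares a factor
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                # this choice of a fails; retry with another
    return gcd(pow(a, r // 2) - 1, n)

print(shor_classical(15, 2))  # order of 2 mod 15 is 4; gcd(2**2 - 1, 15) = 3
```

Everything except the order-finding step runs efficiently on a classical machine, which is why a quantum order-finding subroutine yields an exponential end-to-end speedup.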

The brain is characterized by extreme power efficiency, fault tolerance, compactness, and the ability to learn; it can make predictions from noisy and unexpected input data. Any artificial system implementing some or all of these features is likely to have a large impact on the way we process information. With increasingly detailed data from neuroscience and the availability of advanced VLSI process nodes, the dream of building physical models of neural circuits at a meaningful scale of complexity is coming closer to realization. Such models deviate strongly from classical processor-memory numerical machines: the two functions merge into a massively parallel network of almost identical cells. Neuromorphic architectures are therefore considered an attractive approach to implementing cognitive computing in hardware. At the same time, recent implementations of deep learning with convolutional neural networks on traditional architectures have shown remarkable results, prompting a debate on whether more biological detail, such as spike communication, would be useful or merely a burden.

This panel will take up this open question and argue for a systematic approach to neuromorphic architectures with an optimized degree of biological realism. It will introduce current projects worldwide, as well as the approach proposed by the EU Human Brain Project to establish a systematic path from biological data to brain-derived neuromorphic machines with a very high degree of configurability.
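The contrast with processor-memory machines can be made concrete with a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic spiking cell model many neuromorphic chips implement in silicon; all parameter values here are illustrative assumptions, not those of any particular system:

```python
def lif(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.5):
    # Minimal leaky integrate-and-fire neuron (illustrative parameters).
    # The membrane potential v is both the cell's memory and its computation:
    # state and processing are merged, unlike a processor-memory machine.
    v = v_rest
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + weight * i      # leaky integration of the input
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_rest                 # reset after firing
    return spikes

# A constant drive produces a regular spike train.
print(lif([1.0] * 10))  # -> [2, 5, 8]
```

Whether such spike-based communication buys real systems anything over dense convolutional arithmetic is precisely the debate the panel addresses.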



Sustainable Software Development and Publication Practices in the Computational Sciences

Chair: Jack Wells (Oak Ridge National Laboratory, USA)


The goal of the PASC papers program is to advance the quality of formal scientific communication between the related disciplines of computational science and engineering. The program grew from the observation that the computer science community traditionally publishes in the proceedings of major international conferences, while the domain science community generally publishes in discipline-specific journals – and cross-readership is very limited. The aim of our initiative is to build and sustain a platform that enables engagement between the computer science, applied mathematics, and domain science communities through a combination of conference participation, conference papers, and post-conference journal publications.

The PASC papers initiative allows authors to benefit from the interdisciplinarity and rapid dissemination of results afforded by the conference venue, as well as from the impact associated with subsequent publication in a high-quality scientific journal. To facilitate such journal publication, PASC has recently formed collaborative partnerships with a number of scientific journals, including Computer Physics Communications (CPC), the Journal of Advances in Modeling Earth Systems (JAMES), and ACM Transactions on Mathematical Software (ACM TOMS).

In this panel discussion, representatives from these journals are invited to share their thoughts on publication practices in the computational sciences, including the publication of software codes. We will discuss best practices for sustainable software development and address questions such as: How can we ensure that code and infrastructure will still be there in ten-plus years? How can we validate published results and guarantee reproducibility? Finally, we will describe our vision for the PASC papers initiative going forward.


  • Thomas Schulthess (CSCS / ETH Zurich, Switzerland)
  • Walter Dehnen (University of Leicester, UK): Editor (Astronomy and Astrophysics) for Computer Physics Communications (CPC)
  • Robert Pincus (University of Colorado, USA): Editor in Chief of the Journal of Advances in Modeling Earth Systems (JAMES)
  • Michael A. Heroux (Sandia National Laboratories, USA): Associate Editor (Replicated Computational Results) for ACM Transactions on Mathematical Software (TOMS)