
Minisymposium

MS2A - Sustainable Scientific Computing

Monday, June 16, 2025, 14:30 - 16:30 CEST
Room 5.0A52

Description

In scientific computing and beyond, we are all used to optimizing the time-to-solution of our computations and to using computers to their fullest power. Yet the world is facing a climate crisis, and we must all strive to minimize our energy footprint. The power-consumption constraint on large-scale computing (e.g., the 20 MW target for exascale systems) encourages scientists to revise the architectural design of hardware, but also of applications, their underlying algorithms, and the working/storage precision. By improving energy efficiency, we contribute to sustainable scientific computing. Developing energy-efficient scientific computing applications is nontrivial and requires expertise in several different areas: algorithms and applications, programming languages and compilers, numerical verification, and computer architecture. In this minisymposium, we aim to discuss challenges in designing and implementing energy-efficient mixed-precision algorithms with the assistance of computer arithmetic tools, energy-efficiency modeling, hardware-software co-design in light of the end of Moore's law, and the potential for automating energy-saving techniques. Key topics include establishing energy-to-solution as an HPC metric, following our initial attempt (https://zenodo.org/records/13306639), exploring energy-optimization opportunities from algorithmic derivation and across computing stacks, and enhancing tools that aid energy-efficient software development.

Presentations

14:30 - 15:00 CEST
Numerical Optimization Targeting Energy-Efficient Scientific Computing

Mixed-precision computing has the potential to significantly reduce the cost of exascale computations, but determining when and how to implement it in programs can be challenging. We propose a methodology for enabling mixed precision with the help of computer arithmetic tools, the roofline model, and computer arithmetic techniques. As case studies, we consider Nekbone, a mini-application for the Computational Fluid Dynamics (CFD) solver Nek5000, and the modern CFD application Neko. With the help of the Verificarlo tool and computer arithmetic techniques, we introduce a strategy to address stagnation issues in the preconditioned Conjugate Gradient method in Nekbone and apply these insights to implement a mixed-precision version of Neko. We evaluate the derived mixed-precision versions of these codes by combining metrics in three dimensions: accuracy, time-to-solution, and energy-to-solution. Notably, mixed precision in Nekbone reduces time-to-solution by roughly 38% and energy-to-solution by 2.8x on MareNostrum 5, while in the real-world Neko application the gain is up to 29% in time and up to 24% in energy, without sacrificing accuracy.
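The kind of mixed-precision preconditioned Conjugate Gradient solver discussed above can be illustrated with a minimal sketch: the solver iterates in float64 while the (Jacobi) preconditioner is stored and applied in float32. This is not the Nekbone/Neko implementation, only a toy example of the general technique; all names here are illustrative.

```python
import numpy as np

def pcg_mixed(A, b, tol=1e-10, max_iter=500):
    """Preconditioned CG in float64 with a Jacobi preconditioner
    stored and applied in float32 (illustrative sketch only)."""
    n = b.size
    x = np.zeros(n)
    r = b - A @ x
    M_inv = (1.0 / np.diag(A)).astype(np.float32)   # low-precision preconditioner
    z = (M_inv * r.astype(np.float32)).astype(np.float64)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = (M_inv * r.astype(np.float32)).astype(np.float64)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Usage on a small, well-conditioned SPD system.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x, iters = pcg_mixed(A, b)
```

Because the preconditioner only steers the iteration, applying it in reduced precision typically does not limit the attainable accuracy of the float64 solve, which is one reason preconditioning is a natural target for precision reduction.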

Roman Iakymchuk (Uppsala University, Umeå University)
15:00 - 15:30 CEST
Sustainable Supercomputing: An Overview of Activities at EPCC

Improving the environmental sustainability of scientific computing has been a key concern for EPCC, the supercomputing centre at the University of Edinburgh, for many years. This talk will give an overview of the operational measures that have been implemented in our data centre and on our systems, and the impact they have had. The talk will also present our research activities around sustainability, including topics such as system utilisation, scheduling, and heat storage.

Michele Weiland (EPCC, The University of Edinburgh)
15:30 - 16:00 CEST
Exploring Numerical Accuracy and Mixed-Precision with Verificarlo and Stochastic Rounding

Reducing the energy cost of computer simulations is critical. While numerical precision must be sufficient to yield reliable scientific insights, lower precision can significantly reduce energy consumption and computation time. We present Verificarlo (https://github.com/verificarlo/verificarlo), an open-source, LLVM-based framework for verifying and optimizing numerical accuracy in complex programs. Verificarlo integrates multiple floating-point backends that simulate numerical errors and the effects of lower-precision arithmetic, including Monte Carlo Arithmetic (MCA) and stochastic rounding. Before reducing precision, it is essential to ensure that simulations are numerically robust. Verificarlo employs alternative floating-point models to detect subtle numerical bugs and defines the number of significant digits probabilistically to assess computational accuracy. Its variable-precision backend enables a thorough exploration of the trade-off between precision and performance, identifying code regions that can safely operate with smaller floating-point formats without compromising reproducibility. Verificarlo has been successfully applied to HPC applications such as neuroimaging pipelines, DFT quantum-mechanical modeling, and structural simulations.
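The probabilistic significant-digit idea behind this approach can be sketched without Verificarlo itself: inject random noise at a chosen virtual precision after each operation (a crude stand-in for MCA's random rounding), run the computation many times, and estimate significant digits as -log10(sigma/|mu|) over the samples. The function names, the virtual precision t=24, and the noise model are all assumptions of this toy emulation, not Verificarlo's actual backends.

```python
import math
import random

def mca_perturb(x, t=24):
    """Perturb x with uniform noise at the 2^(e-t) level, where 2^(e-1) <= |x| < 2^e,
    crudely emulating Monte Carlo Arithmetic at a virtual precision of t bits."""
    if x == 0.0:
        return 0.0
    e = math.frexp(x)[1]                       # binary exponent of x
    return x + (random.random() - 0.5) * 2.0 ** (e - t)

def noisy_sum(values, t=24):
    """Sequential summation with rounding noise injected after each addition."""
    s = 0.0
    for v in values:
        s = mca_perturb(s + v, t)
    return s

def significant_digits(samples):
    """Probabilistic estimate: s = -log10(sigma / |mu|) over repeated runs."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / (n - 1)
    return -math.log10(math.sqrt(var) / abs(mu))

# Usage: how many digits survive summing 1000 terms at ~float32 precision?
random.seed(42)
data = [1.0 / (i + 1) for i in range(1000)]
runs = [noisy_sum(data, t=24) for _ in range(100)]
s = significant_digits(runs)
```

Lowering t in this sketch mimics trying a smaller floating-point format: if the estimated significant digits remain above the application's requirement, that code region is a candidate for precision reduction.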

Pablo de Oliveira Castro (UVSQ, Université Paris-Saclay)
16:00 - 16:30 CEST
Probabilistic Error Analysis of Limited-Precision Stochastic Rounding

Classical probabilistic rounding error analysis is well suited to stochastic rounding (SR), yielding strong results for floating-point algorithms relying on summation. For many numerical linear algebra algorithms, one can prove probabilistic error bounds that grow as $\mathcal{O}(\sqrt{n}u)$, where $n$ is the problem size and $u$ is the unit roundoff. These bounds are asymptotically tighter than the worst-case ones, which grow as $\mathcal{O}(nu)$. For certain algorithms, SR is unbiased. However, all these results were derived under the assumption that SR is implemented exactly, which requires a number of random bits that is too large to be suitable for practical implementations. We investigate the effect of using $r$ random bits in probabilistic SR error analysis. To this end, we introduce a new rounding mode, limited-precision SR. By accounting for the $r$ random bits actually used, this new rounding mode accurately matches hardware implementations, unlike the ideal SR generally assumed in the literature. We show that this new rounding mode is biased and that the bias is a function of $r$. As $r$ approaches infinity, however, the bias disappears, and limited-precision SR converges to the ideal SR. We develop a novel model for probabilistic error analysis of algorithms employing SR. Several numerical examples corroborate our theoretical findings.
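The bias of limited-precision SR can be demonstrated with a small experiment, here on a simplified uniform grid rather than actual floating-point numbers (the grid, the function names, and the choice of $r$ values are illustrative assumptions, not the paper's formal setting). A hardware-style implementation adds an $r$-bit random integer to the top $r$ discarded bits and lets the carry decide the rounding direction, so the round-up probability is $\lfloor f 2^r \rfloor / 2^r$ instead of the fraction $f$ itself.

```python
import random

def sr_limited(x, ulp, r, rng):
    """Stochastically round x to the grid {k*ulp} using only r random bits:
    add an r-bit random integer to the top r discarded bits; the carry
    out decides whether to round up."""
    k = x // ulp                  # grid point below x (as a multiple of ulp)
    f = x / ulp - k               # fractional part in [0, 1)
    kept = int(f * 2 ** r)        # top r discarded bits
    R = rng.randrange(2 ** r)     # r-bit uniform random integer
    up = (kept + R) >= 2 ** r     # carry out => round up
    return (k + (1.0 if up else 0.0)) * ulp

# Empirical bias: round 0.3 to the integer grid 100000 times.
# With r=2, P(round up) = floor(0.3*4)/4 = 0.25, so E[result] = 0.25 < 0.3:
# the mean underestimates x. With r=16 the bias is negligible.
rng = random.Random(1)
N = 100000
mean_r2 = sum(sr_limited(0.3, 1.0, 2, rng) for _ in range(N)) / N
mean_r16 = sum(sr_limited(0.3, 1.0, 16, rng) for _ in range(N)) / N
```

Increasing $r$ refines the quantization of the round-up probability toward the exact fraction, which is the mechanism by which the bias vanishes as $r \to \infty$.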

El-Mehdi El Arar (University of Rennes, Inria)