Minisymposium
MS5A - Breaking the HPC Silos for Sustainable Development
Description
IDEAS4HPC proposes a panel of four presentations under the topic of "Breaking the HPC Silos for Sustainable Development". Each presentation will describe the breadth and depth of progress achieved by working at the intersection of different disciplines, or with team members of diverse origins and backgrounds. The first speaker, an early-stage researcher, is an Iranian computer scientist studying in Spain who will share her experience as a summer student in the ROOT team at CERN. The second speaker currently works as a scientific officer for the World Climate Programme at the WMO in Geneva and is an expert in sea-level research. This topic has become of pivotal importance for climate change adaptation, as a large proportion of the world’s population lives in coastal areas and is seeing its habitats and livelihoods dramatically altered.
Presentations
My work in High-Performance Computing (HPC) has been at the center of my academic life, crossing several disciplines, including medical data analysis and physics. At CERN OpenLab, I was a member of the ROOT team, where I worked on HPC-based large-scale data processing. These experiences highlighted the essential role HPC plays in advancing scientific discovery and interdisciplinary collaboration. In this talk, I will explore how HPC can help break existing silos, enhance computational efficiency, and support sustainable development by facilitating innovation across research domains.
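Since the abstract stays at a high level, here is a minimal, hedged sketch of the declarative, parallel processing style that ROOT's RDataFrame offers, assuming a PyROOT installation; the synthetic event count, the "pt" column, and the cut value are illustrative, not taken from the speaker's actual work.

```python
# Hedged sketch of declarative, parallel data processing with ROOT's
# RDataFrame (PyROOT). The 100k synthetic "events" and the "pt" column
# are generated here; a real analysis would read a TTree from files.
import ROOT

ROOT.EnableImplicitMT()  # parallelize the event loop over available cores

# Build a lazy computation graph: define a column, filter, book a histogram
df = ROOT.RDataFrame(100000).Define("pt", "gRandom->Exp(20.0)")
high_pt = df.Filter("pt > 30.0", "high-pT selection")
hist = high_pt.Histo1D(
    ("pt", "Transverse momentum;pT [GeV];events", 100, 0.0, 200.0), "pt"
)

# The event loop runs once, in parallel, when a result is first requested
print("selected events:", high_pt.Count().GetValue())
```

The key design point this illustrates is laziness: filters and histograms are composed up front, and ROOT runs a single multithreaded pass over the data when the first result is accessed.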
The ocean is a fundamental driver of Earth’s climate, yet climate change and human activities are altering its physical, chemical, and biological processes. Rising sea levels, marine heatwaves, ocean acidification, and biodiversity loss pose significant threats to marine ecosystems and the communities that depend on them. These challenges are not uniform—coastal regions, small island nations, and marginalized communities, including women who play key roles in fisheries, ocean governance, and marine research, are often the most affected. Tackling these interconnected issues requires breaking silos across disciplines, geographies, and governance structures. Oceanography, climate science, and socio-economic studies must work in tandem, leveraging high-performance computing (HPC) to integrate climate models, satellite data, and regional ocean observations. Women scientists and policymakers are at the forefront of this collaborative effort, advancing scientific innovation, data-driven decision-making, and inclusive marine resource management. Their contributions help bridge research and policy, ensuring that diverse perspectives shape sustainable ocean governance. Global initiatives like the UN High Seas Treaty and Sustainable Development Goals (SDGs) highlight the importance of interdisciplinary cooperation. By fostering integrated, cross-sectoral action, we can strengthen ocean resilience and safeguard marine ecosystems for future generations.
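As a hedged illustration of the kind of integration mentioned above (combining model output with ocean observations), the sketch below interpolates a gridded sea-level field onto tide-gauge locations with xarray; it is not the speaker's workflow, and all data, station names, and coordinates are synthetic.

```python
# Minimal sketch: compare a gridded sea-level model field with point
# observations by interpolating the model onto station coordinates.
# All data below is synthetic; a real workflow would read model output
# and tide-gauge records from files. Requires xarray and scipy.
import numpy as np
import xarray as xr

# Synthetic model field: sea-surface height anomaly on a regular grid
lat = np.arange(-10.0, 10.5, 0.5)
lon = np.arange(100.0, 120.5, 0.5)
ssh = xr.DataArray(
    0.01 * np.random.randn(lat.size, lon.size),
    coords={"lat": lat, "lon": lon},
    dims=("lat", "lon"),
    name="ssh_anomaly",
)

# Hypothetical tide-gauge stations (names and coordinates are made up)
stations = {"station_a": (2.3, 105.1), "station_b": (-5.7, 112.9)}

# Interpolate the model field to each station for a model-vs-gauge comparison
for name, (s_lat, s_lon) in stations.items():
    model_at_station = ssh.interp(lat=s_lat, lon=s_lon)
    print(f"{name}: model SSH anomaly = {float(model_at_station):+.4f} m")
```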
Over the last two decades, scientific workflow management systems (WMSs) have enabled the execution of complex, multi-task applications on a variety of computational platforms, including today’s exascale systems. They ensure efficient execution of computational and data management tasks while adhering to their data and control dependencies. During workflow execution, WMSs monitor the execution of tasks, detect anomalies and failures, and deploy recovery mechanisms when needed. However, as workflows and cyberinfrastructure grow in scale, heterogeneity, and complexity, traditional WMS approaches face challenges in adaptability and resilience. This talk will provide examples of modern workflow applications from a variety of scientific domains, describe their challenges, and discuss approaches to managing their execution in heterogeneous environments.
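To make the dependency-driven execution and recovery model concrete, here is a toy, hedged sketch using only the Python standard library; it is not any particular WMS, and the three-task pipeline and retry policy are invented for illustration.

```python
# Toy sketch of the core WMS idea: run tasks in dependency order and
# retry failed tasks as a simple recovery mechanism. Real systems add
# scheduling, data staging, monitoring, and distributed execution.
from graphlib import TopologicalSorter  # Python 3.9+

def run_workflow(tasks, deps, retries=2):
    """tasks: name -> callable; deps: name -> set of prerequisite names."""
    for name in TopologicalSorter(deps).static_order():
        for attempt in range(1, retries + 2):
            try:
                tasks[name]()
                print(f"{name}: ok (attempt {attempt})")
                break
            except Exception as exc:
                print(f"{name}: failed ({exc}), attempt {attempt}")
        else:
            raise RuntimeError(f"workflow aborted: {name} exhausted retries")

# Hypothetical three-task pipeline: preprocess -> simulate -> analyze
run_workflow(
    tasks={
        "preprocess": lambda: None,
        "simulate": lambda: None,
        "analyze": lambda: None,
    },
    deps={"simulate": {"preprocess"}, "analyze": {"simulate"}},
)
```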
Today’s large-scale AI model training and serving jobs require many hardware accelerators, making these jobs extremely costly and power-hungry. Yet despite spanning many GPUs, AI jobs often underutilize individual GPUs for a variety of reasons, including data preprocessing stalls, communication stalls, limited batching opportunities, and imbalanced memory and compute usage of individual operators within a job. This inefficient use of hardware accelerators further increases costs. In this talk, we will discuss why optimizing hardware accelerator (e.g., GPU) utilization is key to improving the cost and energy efficiency of AI workloads and how we can achieve this. I will present several computer systems that we are building as part of the Swiss AI initiative to optimize GPU cluster configurations and job parallelization strategies for distributed AI training jobs and to share GPUs efficiently while maximizing performance.
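As a hedged illustration of one underutilization cause named above, data preprocessing stalls, the sketch below uses PyTorch's DataLoader to show how moving preprocessing into worker processes overlaps it with the consuming loop; the dataset, per-sample delay, and worker counts are synthetic and not from the talk.

```python
# Minimal sketch of a data-preprocessing stall: if the input pipeline is
# slow, the GPU idles between steps. Overlapping loading with compute
# (worker processes, pinned memory) narrows the gap. Requires torch.
import time
import torch
from torch.utils.data import DataLoader, Dataset

class SlowDataset(Dataset):
    """Simulates expensive CPU-side preprocessing per sample."""
    def __len__(self):
        return 256
    def __getitem__(self, idx):
        time.sleep(0.01)  # stand-in for decoding/augmentation cost
        return torch.randn(3, 224, 224)

def epoch_time(num_workers):
    loader = DataLoader(SlowDataset(), batch_size=32,
                        num_workers=num_workers, pin_memory=True)
    start = time.perf_counter()
    for batch in loader:
        pass  # a real loop would move the batch to the GPU and run a step
    return time.perf_counter() - start

if __name__ == "__main__":  # guard required for worker processes
    print(f"num_workers=0: {epoch_time(0):.2f}s")  # preprocessing blocks the loop
    print(f"num_workers=4: {epoch_time(4):.2f}s")  # preprocessing overlaps
```

The design point: with `num_workers=0` the consuming loop pays the full preprocessing cost serially, while worker processes let preprocessing proceed concurrently, which is one of several utilization levers the abstract alludes to.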