
Minisymposium Presentation

Design and Use of Energy-Efficient Systems in the Deep Learning Era

Tuesday, June 17, 2025
16:30 - 17:00 CEST

Presenter

Pamela Delgado - HES-SO

I’m Pamela Delgado, an Assistant Professor at HES-SO and a lecturer at EPFL in Switzerland. My research focuses on efficiently managing large-scale and limited resources at the intersection of systems and machine learning. At HES-SO, I am a member of its AI center for SMEs and of the steering committee of the Computer Science program. I am also a member of the Swiss Young Academy, a PI in the Swiss AI initiative, and part of the SLICES-CH project within the European SLICES project. Before joining HES-SO, I worked as a senior computer scientist at the Swiss Data Science Center (SDSC) as part of the Renku data science platform team. I received my Ph.D. from EPFL in 2018, where I was advised by Willy Zwaenepoel. My dissertation received an honorable mention for the SPEC Distinguished Dissertation Award and was nominated for EPFL’s Doctoral Program Distinction. My PhD research was generously supported by Microsoft Research as part of the Swiss Joint Research Center, and I received the Google Anita Borg Memorial Scholarship. I have also been a research intern at Microsoft Research in Cambridge, UK.

Description

Modern GPUs, together with ever larger datasets, have enabled the exponential growth and adoption of deep learning models. However, the training and deployment of deep neural networks in widely used large-scale data centers exhibit low GPU hardware utilization, barely reaching 50%, as shown by studies of Microsoft and Alibaba clusters. This wastes hardware resources, especially expensive GPUs, and contributes to the unsustainable carbon footprint of AI. In fact, training even less than 15% of a large language model can produce a carbon footprint equivalent to the average yearly energy consumption of a US household. This talk will discuss the reasons, challenges, and opportunities for designing energy-efficient computing infrastructures, as well as their practical applications.

Authors