Minisymposium Presentation

Parallelizing GaPSE.jl with KernelAbstractions.jl: A Real-World Example of Reproducibility in Julia

Tuesday, June 17, 2025, 16:30 - 17:00 CEST
Climate, Weather and Earth Sciences
Chemistry and Materials
Computer Science and Applied Mathematics
Humanities and Social Sciences
Engineering
Life Sciences
Physics

Presenter

Matteo Foglieni - Leibniz Supercomputing Centre

MSc in Physics, University of Milan. Scientific employee at the Leibniz Supercomputing Centre (LRZ).

Description

Julia is gaining traction in scientific computing, and at the Leibniz Supercomputing Centre (LRZ) we are exploring its potential on our high-performance computing (HPC) systems, particularly on the Intel Ponte Vecchio GPUs of the SuperMUC-NG Phase 2 supercomputer. The Julia package KernelAbstractions.jl enables vendor-agnostic parallelization, allowing developers to write kernels that run efficiently on both multi-threaded CPUs and various GPU architectures with minimal modifications. The ability to write a single-source, hardware-agnostic kernel bridges the gap between different hardware backends and enhances the reproducibility of both results and performance across diverse computing environments.

To evaluate its real-world impact, we apply KernelAbstractions.jl to GaPSE.jl, a cosmology package that computes two-point correlation functions of galaxies, including general-relativistic effects. GaPSE.jl must evaluate numerous nested integrals, which can be computationally expensive. By leveraging parallel execution on CPUs and GPUs, we aim to significantly accelerate these calculations, improving efficiency and scalability. In this talk, we share our experience developing and optimizing kernels with KernelAbstractions.jl, benchmark Julia's performance on HPC systems, and show how reproducibility is ensured in a real-world application.
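To illustrate the single-source, backend-agnostic style the abstract describes, here is a minimal KernelAbstractions.jl sketch (not taken from GaPSE.jl; the kernel, variable names, and element-wise operation are illustrative assumptions). The same `@kernel` definition runs on a multi-threaded CPU via `CPU()`, and the backend object can be swapped for a GPU backend (e.g. from oneAPI.jl on Intel GPUs) without changing the kernel source:

```julia
using KernelAbstractions

# A minimal element-wise kernel: c[i] = a[i] + b[i].
# @index(Global) gives this work-item's position in the global ndrange.
@kernel function add_kernel!(c, a, b)
    i = @index(Global)
    c[i] = a[i] + b[i]
end

# Select a backend; on SuperMUC-NG Phase 2 one would instead pass a
# GPU backend (e.g. from oneAPI.jl) and allocate arrays on the device.
backend = CPU()

a = rand(Float32, 1024)
b = rand(Float32, 1024)
c = similar(a)

# Instantiate the kernel for this backend and launch it over the array.
kernel! = add_kernel!(backend)
kernel!(c, a, b; ndrange = length(c))
KernelAbstractions.synchronize(backend)
```

Because only the `backend` value changes between targets, both the numerical results and the kernel source stay identical across machines, which is the reproducibility property the talk emphasizes.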

Authors