Accelerating cosmological simulations on GPUs: a portable approach using OpenMP
M. D. Lepinzan, G. Lacopo, D. Goz, G. Taffoni, P. Monaco, P. J. Elahi, U. Varetto, M. Cytowski
Published: 2025/10/3
Abstract
In this work we present the porting to Graphics Processing Units (GPUs), using OpenMP target directives, and the optimization of a key module within the cosmological {\pinocchio} code, a Lagrangian Perturbation Theory (LPT)-based framework widely used for generating dark matter (DM) halo catalogs. Our optimization focuses on the segment of the code responsible for calculating the collapse time of each particle in the simulation. Due to the embarrassingly parallel nature of this computation, it is an ideal candidate for GPU offloading. As part of the porting process, we developed fully GPU-native implementations of both the cubic spline and bilinear interpolation routines required for evaluating collapse times. Since the GNU Scientific Library (GSL) does not support GPU offloading, these custom implementations run entirely on the GPU and achieve residuals of only $\sim0.003\%$ relative to the CPU-based GSL implementation. Comparative benchmarking on the LEONARDO (NVIDIA-based) and SETONIX (AMD-based) supercomputers reveals notable portability and performance, with speedups of~\textit{4x} and up to~\textit{8x}, respectively. While the collapse time calculation is not a primary bottleneck in the overall workflow, the acceleration shortens each full production run by $\sim 100$ seconds, leading to a cumulative saving of $\sim 160000$ Standard-h ($\sim28$ hours wall time) across thousands of simulations. Roofline analysis shows that our GPU port achieves over 80\% of the theoretical FP64 peak, confirming efficient compute-bound execution. This work demonstrates that OpenMP directives offer a portable, effective strategy for accelerating large-scale cosmological simulations on heterogeneous hardware.