Galaxy mergers classification using CNNs trained on Sérsic models, residuals and raw images

D. M. Chudy, W. J. Pearson, A. Pollo, L. E. Suelves, B. Margalef-Bentabol, L. Wang, V. Rodriguez-Gomez, A. La Marca

Published: 2025/2/23

Abstract

Galaxy mergers are crucial for understanding galaxy evolution, and with large upcoming datasets, automated methods, such as Convolutional Neural Networks (CNNs), are needed for efficient detection. It is understood that these networks work by identifying deviations from the regular, expected shapes of galaxies, which are indicative of a merger event. Using images from the IllustrisTNG simulations, we aim to assess how faint features, source position, and shape information present in galaxy merger images affect the performance of a CNN merger vs. non-merger classifier. We fit Sérsic profiles to each galaxy in mock images from the IllustrisTNG simulations. We subtract the profiles from the original images to create residual images, and we train three identical CNNs on three different datasets: original images (CNN1), model images (CNN2), and residual images (CNN3). We find that galaxy merger classification is possible based only on the faint features present in residual images, or only on the source position and shape information present in model images. The results show that CNN1 correctly classifies 74% of images, CNN2 70%, and CNN3 68%. Source position and shape information is crucial for pre-merger classification, while residual features are important for post-merger classification. CNN3 classifies post-mergers in the latest merger stage the best out of all three classifiers.
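The construction of the model and residual images described above can be sketched as follows. This is a minimal, illustrative example assuming a single galaxy per cutout and using astropy's `Sersic2D` model and least-squares fitter; the initial-guess parameters and the function name `make_residual` are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch: fit a Sersic profile to a galaxy cutout and form the
# model image (CNN2 input) and residual image (CNN3 input).
import numpy as np
from astropy.modeling import models, fitting

def make_residual(image: np.ndarray):
    """Fit a single Sersic profile to `image`; return (model, residual)."""
    ny, nx = image.shape
    y, x = np.mgrid[:ny, :nx]

    # Illustrative initial guess: profile centred on the cutout.
    init = models.Sersic2D(amplitude=image.max(), r_eff=nx / 8, n=2.0,
                           x_0=nx / 2, y_0=ny / 2, ellip=0.3, theta=0.0)
    fitter = fitting.LevMarLSQFitter()
    best = fitter(init, x, y, image)

    model_image = best(x, y)               # smooth Sersic model
    residual_image = image - model_image   # faint features left after subtraction
    return model_image, residual_image
```

In this setup, the original cutouts, the fitted model images, and the residual images would each form the training set for one of the three otherwise identical CNNs.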