Self-supervised Radio Representation Learning: Can we Learn Multiple Tasks?

Ogechukwu Kanu, Ashkan Eshaghbeigi, Hatem Abou-Zeid

Published: 2025/9/3

Abstract

Artificial intelligence (AI) is anticipated to play a pivotal role in 6G. However, a key challenge in developing AI-powered solutions is the extensive data collection and labeling effort required to train supervised deep learning models. To overcome this, self-supervised learning (SSL) approaches have recently demonstrated remarkable success across various domains by leveraging large volumes of unlabeled data to achieve near-supervised performance. In this paper, we propose an effective SSL scheme for radio signal representation learning using momentum contrast. By applying contrastive learning, our method extracts robust, transferable representations from a large real-world dataset. We assess the generalizability of these learned representations across two wireless communications tasks: angle of arrival (AoA) estimation and automatic modulation classification (AMC). Our results show that carefully designed augmentations and diverse data enable contrastive learning to produce high-quality, invariant latent representations. These representations are effective even with frozen encoder weights, and fine-tuning further enhances performance, surpassing supervised baselines. To the best of our knowledge, this is the first work to propose and demonstrate the effectiveness of SSL for radio signals across multiple tasks. Our findings highlight the potential of SSL to transform AI for wireless communications by reducing dependence on labeled data and improving model generalization, paving the way for scalable foundational 6G AI models and solutions.
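To make the momentum-contrast idea concrete, the sketch below shows one MoCo-style pretraining step on raw IQ windows. It is a minimal illustration under assumed details, not the paper's implementation: the `IQEncoder` architecture, the `augment` function (phase rotation plus additive noise), the input shape `(batch, 2, T)`, and all hyperparameters (queue size, momentum, temperature) are hypothetical placeholders.

```python
# Minimal MoCo-style contrastive step on IQ windows (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class IQEncoder(nn.Module):
    """Small 1-D CNN mapping (batch, 2, T) IQ windows to a normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def augment(x):
    """Illustrative RF augmentations: random carrier-phase rotation + AWGN."""
    phase = torch.rand(x.size(0), 1, 1) * 2 * torch.pi
    i, q = x[:, 0:1], x[:, 1:2]
    x = torch.cat([i * torch.cos(phase) - q * torch.sin(phase),
                   i * torch.sin(phase) + q * torch.cos(phase)], dim=1)
    return x + 0.05 * torch.randn_like(x)

dim, K, m, tau = 128, 4096, 0.999, 0.07      # embed dim, queue size, momentum, temperature
f_q, f_k = IQEncoder(dim), IQEncoder(dim)    # query and key encoders
f_k.load_state_dict(f_q.state_dict())
for p in f_k.parameters():
    p.requires_grad = False
queue = F.normalize(torch.randn(K, dim), dim=1)   # queue of negative keys

opt = torch.optim.Adam(f_q.parameters(), lr=1e-3)
x = torch.randn(32, 2, 1024)                 # one batch of IQ windows (placeholder data)

q = f_q(augment(x))                          # query view
with torch.no_grad():
    # Momentum (EMA) update of the key encoder, then encode the key view.
    for pq, pk in zip(f_q.parameters(), f_k.parameters()):
        pk.mul_(m).add_(pq.detach(), alpha=1 - m)
    k = f_k(augment(x))

l_pos = (q * k).sum(dim=1, keepdim=True)     # (N, 1) positive logits
l_neg = q @ queue.t()                        # (N, K) negative logits
logits = torch.cat([l_pos, l_neg], dim=1) / tau
labels = torch.zeros(q.size(0), dtype=torch.long)  # positive pair sits at index 0
loss = F.cross_entropy(logits, labels)       # InfoNCE loss
opt.zero_grad(); loss.backward(); opt.step()

queue = torch.cat([k, queue])[:K]            # enqueue new keys, dequeue oldest
```

After pretraining in this fashion, the encoder `f_q` would be reused for the downstream tasks the abstract mentions (AoA estimation, AMC), either with frozen weights and a task-specific head, or fine-tuned end to end.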
