Towards channel foundation models (CFMs): Motivations, methodologies and opportunities

Jun Jiang, Yuan Gao, Xinyi Wu, Shugong Xu

Published: 2025/7/18

Abstract

Artificial intelligence (AI) has emerged as a pivotal enabler for next-generation wireless communication systems. However, conventional AI-based models encounter several limitations, such as heavy reliance on labeled data, limited generalization capability, and task-specific design. To address these challenges, this paper introduces, for the first time, the concept of channel foundation models (CFMs): a novel and unified framework designed to tackle a wide range of channel-related tasks through a pretrained, universal channel feature extractor. By leveraging advanced AI architectures and self-supervised learning techniques, CFMs can effectively exploit large-scale unlabeled data without the need for extensive manual annotation. We further analyze the evolution of AI methodologies, from supervised learning and multi-task learning to self-supervised learning, emphasizing the distinct advantages of the latter in facilitating the development of CFMs. Additionally, we provide a comprehensive review of existing studies on self-supervised learning in this domain, categorizing them into generative, discriminative, and combined paradigms. Given that research on CFMs is still at an early stage, we identify several promising future research directions, focusing on model architecture innovation and the construction of high-quality, diverse channel datasets.
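To make the generative self-supervised paradigm mentioned in the abstract concrete, below is a minimal sketch of masked channel reconstruction: a Transformer encoder treats each subcarrier's channel state information (CSI) vector as a token, random tokens are hidden, and the model is trained to reconstruct them from unlabeled data. This is an illustrative assumption on our part, not the paper's specific architecture; all names (e.g., `ChannelEncoder`, `masked_reconstruction_step`), dimensions, and the random placeholder CSI are hypothetical.

```python
import torch
import torch.nn as nn

class ChannelEncoder(nn.Module):
    """Toy Transformer encoder over subcarrier tokens of a CSI matrix."""
    def __init__(self, num_antennas=4, dim=64, depth=2, heads=4):
        super().__init__()
        # Each subcarrier's complex channel vector (real/imag stacked) is one token.
        self.embed = nn.Linear(2 * num_antennas, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 2 * num_antennas)  # reconstruction head

    def forward(self, x):  # x: (batch, subcarriers, 2 * num_antennas)
        return self.head(self.backbone(self.embed(x)))

def masked_reconstruction_step(model, csi, mask_ratio=0.3):
    """One generative SSL step: hide random subcarrier tokens, reconstruct them."""
    mask = torch.rand(csi.shape[:2], device=csi.device) < mask_ratio  # (batch, subcarriers)
    corrupted = csi.clone()
    corrupted[mask] = 0.0  # simple zero-masking of the hidden tokens
    pred = model(corrupted)
    # Compute the loss only on the masked positions.
    return nn.functional.mse_loss(pred[mask], csi[mask])

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ChannelEncoder()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    # Hypothetical unlabeled CSI: 32 samples, 64 subcarriers, 4 antennas (real + imag).
    csi = torch.randn(32, 64, 8)
    loss = masked_reconstruction_step(model, csi)
    loss.backward()
    opt.step()
    print(f"masked-reconstruction loss: {loss.item():.4f}")
```

After pretraining with an objective of this kind, the encoder (without the reconstruction head) would serve as the universal channel feature extractor that downstream, task-specific heads are attached to.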
