Efficient Domain Generalization in Wireless Networks with Scarce Multi-Modal Data
Minsu Kim, Walid Saad, Doru Calin
Published: 2025/10/5
Abstract
In 6G wireless networks, multi-modal machine learning (ML) models can be leveraged to enable situation-aware network decisions in dynamic environments. However, trained ML models often fail to generalize under domain shift, i.e., when the training and test data distributions differ, because they tend to rely on modality-specific spurious features. In practical wireless systems, domain shifts occur frequently due to dynamic channel statistics, moving obstacles, or hardware configuration changes. Thus, there is a need for learning frameworks that can achieve robust generalization from scarce multi-modal data in wireless networks. In this paper, a novel and data-efficient two-phase learning framework is proposed to improve generalization performance in unseen and unfamiliar wireless environments with a minimal amount of multi-modal data. In the first phase, a physics-based loss function is employed to enable each base station (BS) to learn the physics underlying its wireless environment as captured by multi-modal data. The data efficiency of the physics-based loss function is analytically investigated. In the second phase, collaborative domain adaptation is proposed to leverage the wireless environment knowledge of multiple BSs to guide BSs that under-perform due to domain shift. Specifically, domain-similarity-aware model aggregation is proposed to exploit the knowledge of BSs that have experienced similar domains. To validate the proposed framework, a new dataset generation framework is developed by integrating CARLA with MATLAB-based millimeter wave (mmWave) channel modeling to predict mmWave received signal strength (RSS). Simulation results show that the proposed physics-based training requires only 13% of the data samples to achieve the same performance as a state-of-the-art baseline that does not use physics-based training. Moreover, the proposed collaborative domain adaptation needs only 25% of the data samples and 20% of the FLOPs of baseline schemes to reach convergence.
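The second phase described above relies on domain-similarity-aware model aggregation across base stations. The sketch below illustrates one possible form such aggregation could take, assuming (hypothetically) that each BS's domain is summarized by a feature embedding and that cosine similarity with a softmax weighting is used to combine helper models; the paper's actual similarity measure and aggregation rule may differ, so this is a minimal sketch rather than the proposed method.

```python
import numpy as np


def similarity_aware_aggregate(target_embedding, helper_embeddings,
                               helper_params, temperature=1.0):
    """Combine helper-BS model parameters, weighting each helper by how
    similar its observed domain is to the target BS's domain.

    target_embedding:  1-D array summarizing the target BS's domain
                       (e.g., channel/scene statistics) -- hypothetical feature.
    helper_embeddings: list of 1-D arrays, one per helper BS.
    helper_params:     list of flat model-parameter vectors, one per helper BS.
    """
    # Cosine similarity between the target domain and each helper domain
    # (the choice of similarity metric is an assumption for this sketch).
    sims = np.array([
        float(np.dot(target_embedding, e)
              / (np.linalg.norm(target_embedding) * np.linalg.norm(e) + 1e-12))
        for e in helper_embeddings
    ])

    # Softmax over similarities -> aggregation coefficients that favor
    # helpers whose domains resemble the target BS's environment.
    alphas = np.exp(sims / temperature)
    alphas /= alphas.sum()

    # Convex combination of helper model parameters.
    return sum(a * w for a, w in zip(alphas, helper_params))


# Usage example: three helper BSs, 4-dim domain embeddings, 10-parameter models.
rng = np.random.default_rng(0)
target = rng.normal(size=4)
helpers = [rng.normal(size=4) for _ in range(3)]
models = [rng.normal(size=10) for _ in range(3)]
aggregated = similarity_aware_aggregate(target, helpers, models)
```

A softmax over similarities is used here so that an under-performing BS borrows mostly from BSs whose domains resemble its own, while still receiving a small contribution from the others; the temperature parameter (an assumption of this sketch) controls how sharply the aggregation focuses on the most similar domains.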