Data-Free Knowledge Distillation for LiDAR-Aided Beam Tracking in MmWave Systems
Abolfazl Zakeri, Nhan Thanh Nguyen, Ahmed Alkhateeb, Markku Juntti
Published: 2025/9/23
Abstract
Multimodal sensing reduces beam training overhead, but its use is constrained by the complexity of machine learning models and their demand for large training datasets. To address this, we propose a data-free (DF) knowledge distillation (KD) framework for efficient LiDAR-aided mmWave beam tracking, i.e., predicting the best current and future beams. Specifically, we develop a knowledge inversion framework in which a generator synthesizes LiDAR input data from random noise, guided by a loss function defined on the features and outputs of a pre-trained teacher model. The student model is then trained on the synthetic data with knowledge distilled from the teacher. The generator loss combines three terms: a metadata loss, an activation loss, and an entropy loss. For student training, in addition to the standard Kullback-Leibler (KL) divergence loss, we also consider a mean-squared error (MSE) loss between the teacher and student logits. Simulation results show that the proposed DF-KD slightly outperforms the teacher in Top-1 and Top-5 accuracies. Moreover, we observe that the metadata loss contributes significantly to the generator performance, and that the MSE loss can effectively replace the standard KD loss for the student while requiring fewer hyperparameters to tune.
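
The following is a minimal PyTorch sketch of the DF-KD loop described in the abstract, using toy stand-in networks. The architectures, loss weights, and the exact forms of the metadata, activation, and entropy losses are illustrative assumptions (here: BatchNorm-statistic matching, an activation-magnitude reward, and per-sample prediction entropy), not the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NOISE_DIM, IN_DIM, N_BEAMS = 32, 64, 16  # toy sizes, not the paper's

    class Teacher(nn.Module):
        # Stand-in for the pre-trained LiDAR-aided beam predictor.
        def __init__(self):
            super().__init__()
            self.bn_in = nn.BatchNorm1d(IN_DIM)  # holds running input statistics
            self.body = nn.Sequential(nn.Linear(IN_DIM, 128), nn.ReLU())
            self.head = nn.Linear(128, N_BEAMS)
        def forward(self, x):
            f = self.body(self.bn_in(x))         # intermediate activations
            return self.head(f), f

    teacher = Teacher().eval()                   # pre-trained in practice
    for p in teacher.parameters():
        p.requires_grad_(False)                  # teacher stays frozen
    generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                              nn.Linear(128, IN_DIM))
    student = nn.Sequential(nn.Linear(IN_DIM, 64), nn.ReLU(),
                            nn.Linear(64, N_BEAMS))
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

    for step in range(1000):
        z = torch.randn(256, NOISE_DIM)

        # Generator update: three-term loss defined on the frozen teacher.
        x_syn = generator(z)                     # synthetic LiDAR-like input
        logits_t, feats = teacher(x_syn)
        probs = F.softmax(logits_t, dim=1)
        # "Metadata" loss (assumed form): match synthetic-input statistics
        # to the teacher's stored BatchNorm running statistics.
        meta = (F.mse_loss(x_syn.mean(0), teacher.bn_in.running_mean)
                + F.mse_loss(x_syn.var(0, unbiased=False),
                             teacher.bn_in.running_var))
        act = -feats.abs().mean()                # reward strong activations
        ent = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()  # entropy
        loss_g = meta + 0.1 * act + 0.1 * ent    # weights are illustrative
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        # Student update: distill from the teacher on synthetic data only.
        x_syn = x_syn.detach()
        logits_s = student(x_syn)
        # MSE on logits, the alternative to the temperature-scaled KL loss:
        # F.kl_div(F.log_softmax(logits_s / T, 1),
        #          F.softmax(logits_t.detach() / T, 1),
        #          reduction='batchmean') * T * T
        loss_s = F.mse_loss(logits_s, logits_t.detach())
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()

Note that the MSE variant avoids choosing a distillation temperature and a loss-mixing weight, which is consistent with the abstract's observation that it requires fewer hyperparameters to tune.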