Cross-Modal Knowledge Distillation for Speech Large Language Models

Enzhi Wang, Qicheng Li, Zhiyuan Tang, Yuhang Jia

Published: 2025/9/18

Abstract

In this work, we present the first systematic evaluation of catastrophic forgetting and modality inequivalence in speech large language models, showing that introducing speech capabilities can degrade knowledge and reasoning even when inputs remain textual, and that performance degrades further with spoken queries. To address these challenges, we propose a cross-modal knowledge distillation framework that leverages both text-to-text and speech-to-text channels to transfer knowledge from a text-based teacher model to a speech LLM. Extensive experiments on dialogue and audio understanding tasks validate the effectiveness of our approach in preserving textual knowledge, improving cross-modal alignment, and enhancing reasoning in speech-based interactions.
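To make the two-channel idea concrete, below is a minimal sketch of how such a cross-modal distillation objective could be written, assuming the text-to-text and speech-to-text channels are combined as a weighted sum of KL-divergence losses against the text teacher's output distribution. The function names, temperature, and weighting factor are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a cross-modal KD objective; names and weights are
# assumptions for illustration, not the authors' exact implementation.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Token-level KL divergence between softened teacher and student
    distributions. Logits are shaped (num_tokens, vocab_size)."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)


def cross_modal_kd_loss(teacher_text_logits,
                        student_text_logits,
                        student_speech_logits,
                        alpha=0.5):
    """Combine the two distillation channels:
    - text-to-text: speech LLM on the text query vs. text teacher
    - speech-to-text: speech LLM on the spoken query vs. text teacher
    """
    loss_t2t = kd_loss(student_text_logits, teacher_text_logits)
    loss_s2t = kd_loss(student_speech_logits, teacher_text_logits)
    return alpha * loss_t2t + (1.0 - alpha) * loss_s2t


if __name__ == "__main__":
    # Toy example: 8 tokens, vocabulary of 10.
    teacher = torch.randn(8, 10)
    student_text = torch.randn(8, 10)
    student_speech = torch.randn(8, 10)
    print(cross_modal_kd_loss(teacher, student_text, student_speech))
```

In this sketch, `alpha` trades off preserving textual knowledge (text-to-text channel) against aligning the speech pathway to the teacher (speech-to-text channel); the actual loss design and weighting in the paper may differ.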
