SALM: Spatial Audio Language Model with Structured Embeddings for Understanding and Editing

Jinbo Hu, Yin Cao, Ming Wu, Zhenbo Luo, Jun Yang

Published: 2025-07-22

Abstract

Spatial audio understanding is essential for accurately perceiving and interpreting acoustic environments. However, existing audio-language models struggle to process spatial audio and to perceive spatial acoustic scenes. To address this gap, we propose the Spatial Audio Language Model (SALM), a novel framework that bridges spatial audio and language through multi-modal contrastive learning. SALM integrates a text encoder with a dual-branch audio encoder that decomposes spatial sound into semantic and spatial components via structured audio embeddings. Key features of SALM include seamless alignment between spatial audio and natural language; separate as well as joint extraction of spatial and semantic representations; zero-shot direction classification; and flexible support for spatial audio editing. Experimental results demonstrate that SALM effectively captures and aligns cross-modal representations, yielding well-structured audio embeddings. Furthermore, SALM enables advanced editing capabilities, such as modifying directional audio using text-based embeddings.
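The abstract describes a text encoder paired with a dual-branch audio encoder whose semantic and spatial outputs are contrastively aligned with language. The sketch below is a minimal illustration of that idea, not the paper's implementation: the class name `DualBranchAudioEncoder`, the MLP branches, the pooled input features, and the CLIP-style symmetric InfoNCE loss are all assumed stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchAudioEncoder(nn.Module):
    """Hypothetical dual-branch encoder: one branch models semantic content,
    the other spatial cues. The two halves form a structured embedding that
    can be used separately or jointly."""
    def __init__(self, in_dim: int = 128, embed_dim: int = 256):
        super().__init__()
        self.semantic_branch = nn.Sequential(
            nn.Linear(in_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )
        self.spatial_branch = nn.Sequential(
            nn.Linear(in_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, feats: torch.Tensor):
        # L2-normalized embeddings, as is standard for contrastive alignment.
        sem = F.normalize(self.semantic_branch(feats), dim=-1)
        spa = F.normalize(self.spatial_branch(feats), dim=-1)
        return sem, spa

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE (CLIP-style): matched audio/text pairs on the
    diagonal of the similarity matrix are pulled together."""
    logits = audio_emb @ text_emb.t() / temperature
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random stand-ins for pooled audio features and the
# text encoder's caption embeddings (semantic and spatial descriptions).
batch, in_dim, embed_dim = 8, 128, 256
audio_feats = torch.randn(batch, in_dim)
sem_text = F.normalize(torch.randn(batch, embed_dim), dim=-1)
spa_text = F.normalize(torch.randn(batch, embed_dim), dim=-1)

encoder = DualBranchAudioEncoder(in_dim, embed_dim)
sem, spa = encoder(audio_feats)
loss = contrastive_loss(sem, sem_text) + contrastive_loss(spa, spa_text)
loss.backward()
```

Under this framing, the abstract's zero-shot direction classification would amount to comparing the spatial half of an audio embedding against text embeddings of candidate direction phrases (e.g., "sound from the left"), and text-driven editing to replacing one half of the structured embedding with a text-derived embedding while keeping the other half fixed.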