A Large-Scale Probing Analysis of Speaker-Specific Attributes in Self-Supervised Speech Representations
Aemon Yat Fei Chiu, Kei Ching Fung, Roger Tsz Yeung Li, Jingyu Li, Tan Lee
Published: 2025/1/9
Abstract
Speech self-supervised learning (SSL) models are known to learn hierarchical representations, yet how they encode different speaker-specific attributes remains under-explored. This study investigates the layer-wise disentanglement of speaker information across multiple speech SSL model families and their variants. Drawing from phonetic frameworks, we conduct a large-scale probing analysis of attributes categorised into functional groups: Acoustic (Gender), Prosodic (Pitch, Tempo, Energy), and Paralinguistic (Emotion), which we use to deconstruct each model's representation of Speaker Identity. Our findings validate a consistent three-stage hierarchy: initial layers encode fundamental timbre and prosody; middle layers synthesise abstract traits; and final layers suppress speaker identity in favour of abstract linguistic content. An ablation study shows that while specialised speaker embeddings excel at speaker identification, the intermediate layers of speech SSL models better represent dynamic prosody. This is the first large-scale study to cover a wide range of speech SSL model families and variants with fine-grained speaker-specific attributes, showing how these models hierarchically separate the dynamic style of speech from its intrinsic characteristics and offering practical implications for downstream tasks.
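To make the layer-wise probing setup concrete, the sketch below shows one common way such an analysis is run; the abstract does not specify the authors' exact backbones, probes, or datasets, so the model name, mean-pooling choice, and logistic-regression probe here are illustrative assumptions, not the paper's implementation.

```python
# Minimal layer-wise probing sketch (assumed setup, not the authors' exact pipeline):
# extract per-layer utterance embeddings from a speech SSL model, then fit one
# linear probe per layer for a discrete speaker attribute (e.g. gender).
import torch
import numpy as np
from transformers import AutoModel, AutoFeatureExtractor
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MODEL_NAME = "facebook/wav2vec2-base"  # placeholder SSL backbone
extractor = AutoFeatureExtractor.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

def layer_embeddings(waveform, sr=16000):
    """Return one mean-pooled embedding per layer for a single utterance."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: tuple of (1, T, D) tensors, one per layer
    return [h.mean(dim=1).squeeze(0).numpy() for h in out.hidden_states]

def probe_layerwise(train_wavs, train_labels, test_wavs, test_labels):
    """Fit a logistic-regression probe per layer and return test accuracy per layer."""
    train_feats = [layer_embeddings(w) for w in train_wavs]  # utterances x layers
    test_feats = [layer_embeddings(w) for w in test_wavs]
    scores = []
    for layer in range(len(train_feats[0])):
        X_tr = np.stack([f[layer] for f in train_feats])
        X_te = np.stack([f[layer] for f in test_feats])
        clf = LogisticRegression(max_iter=1000).fit(X_tr, train_labels)
        scores.append(accuracy_score(test_labels, clf.predict(X_te)))
    return scores  # where an attribute is most linearly decodable across depth
```

Plotting the per-layer probe accuracies for each attribute group is one way to visualise the three-stage hierarchy described above.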