GAN-Based Multi-Microphone Spatial Target Speaker Extraction

Shrishti Saha Shetu, Emanuël A. P. Habets, Andreas Brendel

Published: 2025/9/22

Abstract

Spatial target speaker extraction isolates a desired speaker's voice in multi-speaker environments using spatial information, such as the direction of arrival (DoA). Although recent deep neural network (DNN)-based discriminative methods have shown significant performance improvements, the potential of generative approaches, such as generative adversarial networks (GANs), remains largely unexplored for this problem. In this work, we demonstrate that a GAN can effectively leverage both noisy mixtures and spatial information to extract and generate the target speaker's speech. By conditioning the GAN on intermediate features of a discriminative spatial filtering model in addition to the DoA, we enable steerable target extraction with a high spatial resolution of 5 degrees, outperforming state-of-the-art discriminative methods in perceptual quality-based objective metrics.
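To illustrate the conditioning idea described in the abstract, below is a minimal sketch of a generator that fuses a noisy-mixture encoding with a DoA embedding on a 5-degree azimuth grid and intermediate features from a discriminative spatial filtering model. All module names, layer choices, and tensor shapes are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn


class ConditionalGenerator(nn.Module):
    """Hypothetical GAN generator conditioned on spatial information:
    a DoA class embedding plus intermediate features assumed to come
    from a discriminative spatial filtering model."""

    def __init__(self, feat_dim=256, cond_dim=256, n_doa=72):
        super().__init__()
        # 72 DoA classes correspond to a 5-degree azimuth grid (360 / 5).
        self.doa_embed = nn.Embedding(n_doa, cond_dim)
        self.cond_proj = nn.Linear(cond_dim, feat_dim)
        self.mix_enc = nn.Conv1d(1, feat_dim, kernel_size=16, stride=8)
        self.fuse = nn.GRU(2 * feat_dim, feat_dim, batch_first=True)
        self.dec = nn.ConvTranspose1d(feat_dim, 1, kernel_size=16, stride=8)

    def forward(self, mixture, doa_idx, spatial_feats):
        # mixture: (B, 1, T) waveform of the noisy multi-speaker mixture
        # doa_idx: (B,) target DoA index on the 5-degree grid
        # spatial_feats: (B, frames, cond_dim) intermediate features of the
        #                spatial filtering model (assumed shape)
        x = self.mix_enc(mixture).transpose(1, 2)           # (B, frames', feat_dim)
        cond = self.cond_proj(spatial_feats + self.doa_embed(doa_idx).unsqueeze(1))
        n = min(x.shape[1], cond.shape[1])                  # align frame counts
        h, _ = self.fuse(torch.cat([x[:, :n], cond[:, :n]], dim=-1))
        return self.dec(h.transpose(1, 2))                  # estimated target waveform


# Example call with random tensors (shapes only, no trained weights):
gen = ConditionalGenerator()
est = gen(torch.randn(2, 1, 16000), torch.tensor([10, 35]), torch.randn(2, 1999, 256))
```

In a full GAN setup, this generator would be trained adversarially against a discriminator on the extracted speech; that training loop is omitted here.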