Towards High-Fidelity and Controllable Bioacoustic Generation via Enhanced Diffusion Learning

Tianyu Song, Ton Viet Ta

Published: August 30, 2025

Abstract

Generative modeling offers new opportunities for bioacoustics, enabling the synthesis of realistic animal vocalizations that could support biomonitoring efforts and supplement scarce data for endangered species. However, directly generating bird call waveforms from noisy field recordings remains a major challenge. We propose BirdDiff, a generative framework designed to synthesize bird calls from a noisy dataset of 12 wild bird species. The model incorporates a "zeroth layer" stage for multi-scale adaptive bird-call enhancement, followed by a diffusion-based generator conditioned on three modalities: Mel-frequency cepstral coefficients, species labels, and textual descriptions. The enhancement stage improves signal-to-noise ratio (SNR) while minimizing spectral distortion, achieving the highest SNR gain (+10.45 dB) and the lowest Itakura-Saito distance (0.54) compared with three widely used training-free enhancement methods. We evaluate BirdDiff against a baseline generative model, DiffWave. Our method yields substantial improvements across generative quality metrics: Fréchet Audio Distance (0.590 to 0.213), Jensen-Shannon Divergence (0.259 to 0.226), and Number of Statistically-Different Bins (7.33 to 5.58). To assess species-specific detail preservation, we use a ResNet50 classifier trained on the original dataset to identify generated samples. Classification accuracy improves from 35.9% (DiffWave) to 70.1% (BirdDiff), with 8 of 12 species exceeding 70% accuracy. These results demonstrate that BirdDiff enables high-fidelity, controllable bird call generation directly from noisy field recordings.
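The abstract quantifies the enhancement stage with two standard signal metrics: SNR gain in decibels and the Itakura-Saito distance between power spectra. The Python sketch below is an illustrative reconstruction under our own assumptions, not the authors' code; in particular, it assumes a noise-only segment is available for estimating noise power, and all function names are hypothetical.

    import numpy as np

    def snr_db(signal, noise):
        # SNR in dB from a call segment and a noise-only segment.
        return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

    def itakura_saito(p, q, eps=1e-10):
        # Itakura-Saito distance between a reference power spectrum p and
        # an estimate q, averaged over frequency bins.
        ratio = (p + eps) / (q + eps)
        return float(np.mean(ratio - np.log(ratio) - 1.0))

    def power_spectrum(frame):
        # Power spectrum of one windowed frame via the real FFT.
        return np.abs(np.fft.rfft(frame)) ** 2

Under these assumptions, the reported SNR gain corresponds to snr_db(enhanced, noise) - snr_db(noisy, noise), and spectral distortion to itakura_saito(power_spectrum(reference), power_spectrum(enhanced)).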
