Knowledge-Augmented Vision Language Models for Underwater Bioacoustic Spectrogram Analysis

Ragib Amin Nihal, Benjamin Yen, Takeshi Ashizawa, Kazuhiro Nakadai

Published: 2025/9/6

Abstract

Marine mammal vocalization analysis depends on interpreting bioacoustic spectrograms, yet Vision Language Models (VLMs) are not trained on these domain-specific visualizations. We investigate whether VLMs can nonetheless extract meaningful patterns from spectrograms visually. Our framework integrates VLM interpretation with LLM-based validation to build domain knowledge, enabling adaptation to acoustic data without manual annotation or model retraining.
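The interpret-validate-accumulate loop described above can be sketched in Python. This is a minimal illustration only: the stub functions `vlm_interpret` and `llm_validate`, the `KnowledgeBase` class, and all names are hypothetical assumptions standing in for real VLM/LLM calls, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """Accumulates validated interpretations (illustrative assumption)."""
    facts: list = field(default_factory=list)

    def context(self) -> str:
        # Serialized knowledge fed back into later VLM prompts.
        return "\n".join(self.facts)

def vlm_interpret(spectrogram_path: str, context: str) -> str:
    # Stub for a VLM call: describe visual patterns in the spectrogram,
    # conditioned on previously validated domain knowledge.
    return f"upsweep call pattern in {spectrogram_path}"

def llm_validate(interpretation: str) -> bool:
    # Stub for an LLM-based plausibility check of the interpretation.
    return "call" in interpretation

def analyze(spectrograms, kb: KnowledgeBase):
    results = []
    for path in spectrograms:
        interp = vlm_interpret(path, kb.context())
        if llm_validate(interp):
            # Only validated findings enter the knowledge base,
            # so no manual annotation or retraining is needed.
            kb.facts.append(interp)
        results.append(interp)
    return results

kb = KnowledgeBase()
out = analyze(["spec_001.png"], kb)
```

Subsequent spectrograms would then be interpreted with the accumulated context, which is how the framework adapts to the acoustic domain over time.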
