Temporally Heterogeneous Graph Contrastive Learning for Multimodal Acoustic Event Classification

Yuanjian Chen, Yang Xiao, Jinjie Huang

Published: 2025/9/18

Abstract

Multimodal acoustic event classification plays a key role in audio-visual systems. Although combining audio and visual signals improves recognition, aligning the two streams over time and suppressing cross-modal noise remain difficult. Existing methods often process the audio and visual streams separately and fuse their features late, using contrastive or mutual-information objectives. Recent work explores multimodal graph learning, but most approaches fail to distinguish intra-modal from inter-modal temporal dependencies. To address this, we propose Temporally Heterogeneous Graph-based Contrastive Learning (THGCL). Our framework constructs a temporal graph for each event, in which audio and video segments form the nodes and their temporal links form the edges. We introduce Gaussian processes to model intra-modal smoothness, Hawkes processes to model inter-modal decay, and contrastive learning to capture fine-grained cross-modal relationships. Experiments on AudioSet show that THGCL achieves state-of-the-art performance.
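The abstract does not give the exact formulations, so the following is a minimal sketch under common assumptions: a squared-exponential (RBF) kernel as the Gaussian-process smoothness prior on intra-modal edges, an exponential kernel for the Hawkes-style inter-modal decay, and InfoNCE as the contrastive objective over temporally aligned audio-video segment pairs. All function names (e.g., `intra_modal_weights`) are hypothetical, not the authors' API.

```python
import torch
import torch.nn.functional as F

def intra_modal_weights(times, length_scale=1.0):
    # RBF (squared-exponential) kernel over segment timestamps: a standard
    # Gaussian-process prior that makes edge weights between temporally
    # close segments of the SAME modality large, encouraging smoothness.
    diff = times[:, None] - times[None, :]
    return torch.exp(-0.5 * (diff / length_scale) ** 2)

def inter_modal_weights(t_audio, t_video, decay=1.0):
    # Hawkes-style exponential decay: a segment's cross-modal influence
    # fades with the time elapsed between the audio and video segments.
    lag = torch.abs(t_audio[:, None] - t_video[None, :])
    return torch.exp(-decay * lag)

def info_nce(audio_feats, video_feats, temperature=0.1):
    # Contrastive objective (assumed InfoNCE): temporally aligned
    # audio/video pairs are positives; other cross-modal pairs within
    # the same event serve as negatives.
    a = F.normalize(audio_feats, dim=-1)
    v = F.normalize(video_feats, dim=-1)
    logits = a @ v.T / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

# Toy event: 4 audio and 4 video segments with aligned timestamps.
torch.manual_seed(0)
t = torch.linspace(0.0, 3.0, steps=4)
audio = torch.randn(4, 128)   # segment embeddings from an audio encoder
video = torch.randn(4, 128)   # segment embeddings from a visual encoder

# Edge weights of the heterogeneous temporal graph. In the full model
# these would presumably parameterize message passing in a graph network
# that produces the node embeddings; here we only show their construction.
A_intra = intra_modal_weights(t)       # audio-audio edges
V_intra = intra_modal_weights(t)       # video-video edges
AV_inter = inter_modal_weights(t, t)   # audio-video edges

loss = info_nce(audio, video)
print(loss.item())
```

The choice of kernels here is illustrative: any stationary GP kernel could play the smoothness role, and the single-parameter exponential kernel is the simplest decay function consistent with a Hawkes-process interpretation.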
