MOCHA: Multi-modal Objects-aware Cross-arcHitecture Alignment

Elena Camuffo, Francesco Barbato, Mete Ozay, Simone Milani, Umberto Michieli

Published: 2025/9/17

Abstract

We introduce MOCHA (Multi-modal Objects-aware Cross-arcHitecture Alignment), a knowledge distillation approach that transfers region-level multimodal semantics from a large vision-language teacher (e.g., LLaVA) into a lightweight vision-only object detector student (e.g., YOLO). A translation module maps student features into a joint space, where training of both the student and the translator is guided by a dual-objective loss that enforces local alignment and global relational consistency. Unlike prior approaches focused on dense or global alignment, MOCHA operates at the object level, enabling efficient transfer of semantics without modifying the teacher or requiring textual input at inference. We validate our method across four personalized detection benchmarks under few-shot regimes. Results show consistent gains over baselines, with a +10.1 average score improvement. Despite its compact architecture, MOCHA matches the performance of larger multimodal models, proving its suitability for real-world deployment.
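To make the abstract's description concrete, the sketch below illustrates one plausible form of the translation module and dual-objective loss: an MLP translator mapping student region features into the teacher's joint space, a local term aligning each translated region with its teacher counterpart, and a global relational term matching pairwise-similarity structure across regions. The module shapes, loss terms, and weighting are assumptions for illustration; the abstract does not specify MOCHA's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureTranslator(nn.Module):
    """Hypothetical translator: maps student region features into the
    teacher's joint embedding space (two-layer MLP is an assumption)."""
    def __init__(self, student_dim: int, joint_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(student_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, joint_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def dual_objective_loss(student_regions, teacher_regions, translator, beta=1.0):
    """Illustrative dual-objective loss (not the paper's exact terms):
    - local alignment: per-region cosine distance between translated student
      features and teacher region embeddings;
    - global relational consistency: MSE between intra-batch similarity matrices.
    """
    z = F.normalize(translator(student_regions), dim=-1)  # (N, D) translated student features
    t = F.normalize(teacher_regions, dim=-1)              # (N, D) teacher region embeddings

    local = (1.0 - (z * t).sum(dim=-1)).mean()            # local alignment term
    relational = F.mse_loss(z @ z.T, t @ t.T)             # global relational term

    return local + beta * relational


if __name__ == "__main__":
    torch.manual_seed(0)
    translator = FeatureTranslator(student_dim=256, joint_dim=768)
    s = torch.randn(8, 256)  # student features for 8 detected object regions
    t = torch.randn(8, 768)  # teacher (VLM) embeddings for the same regions
    print(dual_objective_loss(s, t, translator).item())
```

In this sketch only the translator and student would receive gradients, consistent with the abstract's statement that the teacher is left unmodified.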
