RACap: Relation-Aware Prompting for Lightweight Retrieval-Augmented Image Captioning

Xiaosheng Long, Hanyu Wang, Zhentao Song, Kun Luo, Hongde Liu

Published: 2025/9/19

Abstract

Recent retrieval-augmented image captioning methods incorporate external knowledge to compensate for limitations in comprehending complex scenes. However, current approaches face two challenges in relation modeling: (1) the representation of semantic prompts is too coarse-grained to capture fine-grained relationships; (2) these methods lack explicit modeling of image objects and their semantic relationships. To address these limitations, we propose RACap, a relation-aware retrieval-augmented model for image captioning, which not only mines structured relation semantics from retrieved captions, but also identifies heterogeneous objects in the image. RACap effectively retrieves structured relation features that contain heterogeneous visual information, enhancing semantic consistency and relational expressiveness. Experimental results show that RACap, with only 10.8M trainable parameters, achieves superior performance compared to previous lightweight captioning models.
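To make the abstract's pipeline concrete, the sketch below illustrates one plausible reading of it: retrieve similar captions for an image, mine relation triples from them, and serialize objects plus relations into a textual prompt for a lightweight decoder. All names here (`Datastore`, the triple heuristic, the prompt format) are illustrative assumptions, not RACap's actual implementation.

```python
# A minimal, hypothetical sketch of a relation-aware retrieval-augmented
# prompting pipeline. Assumes image and caption embeddings live in a shared,
# unit-norm space (e.g. CLIP); everything else is toy logic for illustration.

from dataclasses import dataclass
import numpy as np

@dataclass
class Datastore:
    captions: list[str]        # caption texts
    embeddings: np.ndarray     # (N, D) unit-norm caption embeddings

def retrieve_captions(image_emb: np.ndarray, store: Datastore, k: int = 4) -> list[str]:
    """Nearest-neighbor retrieval by cosine similarity on unit-norm vectors."""
    scores = store.embeddings @ image_emb
    top = np.argsort(scores)[-k:][::-1]
    return [store.captions[i] for i in top]

def mine_relations(captions: list[str]) -> list[tuple[str, str, str]]:
    """Naive (subject, predicate, object) extraction: treat 'noun verb noun'
    word spans as triples. A real system would use a parser or scene graph."""
    triples = []
    for cap in captions:
        words = [w for w in cap.lower().split() if w not in {"a", "an", "the"}]
        for i in range(len(words) - 2):
            if words[i + 1].endswith(("s", "ing")):  # crude verb heuristic
                triples.append((words[i], words[i + 1], words[i + 2]))
    return triples

def build_prompt(objects: list[str], triples: list[tuple[str, str, str]]) -> str:
    """Serialize detected objects and mined relations into a prompt that a
    frozen language-model decoder could condition on."""
    rel_text = "; ".join(" ".join(t) for t in triples)
    return f"Objects: {', '.join(objects)}. Relations: {rel_text}. Caption:"

# Toy usage:
store = Datastore(
    captions=["a dog chasing a ball", "a man riding a horse"],
    embeddings=np.eye(2, 512),          # placeholder unit vectors
)
img = np.zeros(512); img[0] = 1.0       # toy image embedding
caps = retrieve_captions(img, store, k=1)
print(build_prompt(["dog", "ball"], mine_relations(caps)))
# -> Objects: dog, ball. Relations: dog chasing ball. Caption:
```

In a lightweight setting such as the one the abstract describes (10.8M trainable parameters), only a small mapping or prompting module would typically be trained on top of frozen encoder and decoder backbones; the sketch above covers only the prompt-construction side of such a design.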
