Probing the Limits of Stylistic Alignment in Vision-Language Models

Asma Farajidizaji, Akash Gupta, Vatsal Raina

Published: 2025/9/29

Abstract

Vision-language models are increasingly used to generate image captions in specific styles, such as humorous or romantic. However, these transformer-based models often struggle with this subjective task in a zero-shot setting. While preference data can be used to align them toward a desired style, such data is expensive to acquire, limiting exploration of the models' full capabilities. This work addresses the gap by studying the data efficiency of aligning small vision-language models to humorous and romantic caption styles. The approach helps define the performance limits of these models and determine how little preference data is needed to reach stylistic saturation, benchmarking their capabilities and limitations.
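
The abstract does not name the alignment objective used on the preference data. A common choice for this kind of preference alignment is Direct Preference Optimization (DPO); the sketch below is a minimal, assumed illustration of its core loss, with hypothetical sequence log-probabilities standing in for the VLM's scores over chosen and rejected captions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of (chosen, rejected) caption pairs.

    Each tensor holds per-example sequence log-probabilities:
      policy_* -- from the VLM being aligned
      ref_*    -- from a frozen reference copy of the same model
    (Illustrative sketch; the paper's actual objective may differ.)
    """
    # Implicit reward: scaled log-ratio of policy to reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

if __name__ == "__main__":
    # Toy usage with fake log-probs for a batch of 4 caption pairs
    torch.manual_seed(0)
    fake_logps = lambda: -10 * torch.rand(4)
    loss = dpo_loss(fake_logps(), fake_logps(), fake_logps(), fake_logps())
    print(f"DPO loss: {loss.item():.4f}")
```

Under this objective, data efficiency can be probed by training on progressively larger subsets of the preference pairs and observing where style metrics stop improving.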