Generalized User-Oriented Image Semantic Coding Empowered by Large Vision-Language Model

Sin-Yu Huang, Vincent W. S. Wong

Published: 2025/9/10

Abstract

Semantic communication has shown outstanding performance in preserving the overall source information in wireless transmission. For semantically rich content such as images, human users are often interested in specific regions of an image, depending on their intent. Moreover, recent semantic coding models are mostly trained on specific datasets. However, real-world applications may involve images outside the distribution of the training dataset, which makes generalization a crucial but largely unexplored problem. To incorporate the user's intent into semantic coding, in this paper, we propose a generalized user-oriented image semantic coding (UO-ISC) framework, in which the user provides a text query indicating their intent. The transmitter extracts features from the source image that are relevant to the user's query, and the receiver reconstructs an image based on those features. To enhance the generalization ability, we integrate the contrastive language-image pre-training (CLIP) model, a pretrained large vision-language model (VLM), into our proposed UO-ISC framework. To evaluate the relevance between the reconstructed image and the user's query, we introduce a user-intent relevance loss, which is computed using another pretrained large VLM, the large language-and-vision assistant (LLaVA) model. When performing zero-shot inference on unseen objects, simulation results show that the proposed UO-ISC framework outperforms a state-of-the-art query-aware image semantic coding scheme in terms of the answer match rate.
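To make the two VLM roles described above concrete, the sketch below illustrates (i) query-conditioned feature extraction with CLIP and (ii) an answer-match-style check with LLaVA. The pooling scheme, model checkpoints (`openai/clip-vit-base-patch32`, `llava-hf/llava-1.5-7b-hf`), prompt format, and function names are assumptions chosen for illustration; they are not the paper's actual UO-ISC architecture, user-intent relevance loss, or evaluation protocol.

```python
# Minimal sketch, NOT the paper's implementation: query-conditioned feature
# extraction with CLIP and an answer-match check with LLaVA. Checkpoints,
# pooling, and prompts are illustrative assumptions.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import (AutoProcessor, CLIPModel, CLIPProcessor,
                          LlavaForConditionalGeneration)

device = "cuda" if torch.cuda.is_available() else "cpu"

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def query_relevant_features(image: Image.Image, query: str) -> torch.Tensor:
    """Stand-in for the transmitter: pool CLIP patch embeddings with
    attention weights derived from the user's text query."""
    inputs = clip_proc(text=[query], images=image,
                       return_tensors="pt", padding=True).to(device)
    # Patch-level visual tokens from CLIP's vision tower (drop the CLS token),
    # projected into the joint image-text embedding space.
    patches = clip.vision_model(pixel_values=inputs["pixel_values"]).last_hidden_state[:, 1:, :]
    patches = clip.visual_projection(patches)                       # (1, 49, 512)
    text_emb = clip.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])  # (1, 512)
    # Cross-attention: the query embedding attends over the image patches.
    attn = F.softmax(patches @ text_emb.unsqueeze(-1) / patches.shape[-1] ** 0.5, dim=1)
    return (attn * patches).sum(dim=1)                              # query-relevant feature

llava = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16).to(device).eval()
llava_proc = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

@torch.no_grad()
def answers_match(reconstruction: Image.Image, question: str, reference: str) -> bool:
    """Answer-match-style check: ask LLaVA a question about the reconstructed
    image and compare its answer against a reference answer."""
    prompt = f"USER: <image>\n{question} ASSISTANT:"
    inputs = llava_proc(images=reconstruction, text=prompt, return_tensors="pt")
    inputs = {k: v.to(device) for k, v in inputs.items()}
    inputs["pixel_values"] = inputs["pixel_values"].half()          # match model dtype
    out = llava.generate(**inputs, max_new_tokens=20)
    answer = llava_proc.decode(out[0], skip_special_tokens=True).split("ASSISTANT:")[-1]
    return reference.lower() in answer.lower()
```

In the paper, LLaVA serves both to compute the user-intent relevance loss during training and to measure the answer match rate at evaluation; the boolean check above only mimics the latter in the simplest possible form.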
