A Vision-Language Agent System for Compositional Reasoning with VLM-assisted Script and Executable Generation

Yichang Xu, Gaowen Liu, Ramana Rao Kompella, Sihao Hu, Tiansheng Huang, Fatih Ilhan, Selim Furkan Tekin, Zachary Yahn, Ling Liu

Published: 2025/6/9

Abstract

Advances in large language models (LLMs) and large vision models have fueled rapid progress in multi-modal vision-text reasoning. However, existing vision-language models (VLMs) perform poorly on compositional reasoning. This paper presents VLAgent, a vision-language agent system for vision-text compositional reasoning with three novel features. First, VLAgent leverages a pre-trained LLM with few-shot in-context learning to generate a planning script for each compositional reasoning task, and provides a back-end engine that maps the planning script into executable code using the VLAgent library, which the VLAgent executor then runs. Second, VLAgent introduces the SS-parser, which identifies and corrects logic errors embedded in the LLM-generated planning script, further improving the quality of the script-to-executable mapping. Third, VLAgent introduces a compositional reasoning output verifier, which validates and refines the output of complex compositional reasoning steps by leveraging complementary reasoning techniques, e.g., ensemble learning and caption analysis. Extensive experiments are conducted on six visual benchmarks against a dozen SoTA visual reasoning models. The results show that VLAgent outperforms existing representative approaches for compositional text-visual reasoning. Our code and datasets with outputs will be made available upon acceptance.
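To make the three-stage design concrete, below is a minimal, self-contained Python sketch of the pipeline the abstract describes: (1) an LLM generates a planning script, (2) the SS-parser checks the script for logic errors before it is mapped to executable module calls, and (3) an output verifier cross-checks the executor's answer against complementary signals. This is not the authors' implementation; every name here (`generate_script`, `ss_parse`, `execute`, `verify_output`, the toy `LOC`/`COUNT` modules) is a hypothetical stand-in, and the LLM and vision modules are replaced by stubs.

```python
# Hypothetical sketch of the VLAgent pipeline described in the abstract.
# All function and module names are illustrative stand-ins, not the
# paper's actual API.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Step:
    """One line of a planning script: an operation and its arguments."""
    op: str
    args: dict


def generate_script(question: str) -> list[Step]:
    # Stage 1 (sketch): a few-shot-prompted LLM would emit the planning
    # script here; we return a fixed toy script for illustration.
    return [
        Step("LOC", {"object": "dog"}),       # locate matching regions
        Step("COUNT", {"input": "LOC"}),      # count the located regions
        Step("RESULT", {"input": "COUNT"}),   # emit the final answer
    ]


def ss_parse(script: list[Step]) -> list[Step]:
    # Stage 2 (sketch): the SS-parser validates the script's logic,
    # e.g. rejecting a step that references an output no earlier step
    # produced, before the script is compiled to executable code.
    produced: set[str] = set()
    for step in script:
        ref = step.args.get("input")
        if ref is not None and ref not in produced:
            raise ValueError(f"{step.op} references undefined output {ref!r}")
        produced.add(step.op)
    return script


def execute(script: list[Step], image: dict) -> dict:
    # Each verified step is mapped to an executable module call; the
    # "modules" here are trivial stubs over fake detection data.
    env: dict = {}
    for step in script:
        if step.op == "LOC":
            env["LOC"] = image.get(step.args["object"], [])
        elif step.op == "COUNT":
            env["COUNT"] = len(env[step.args["input"]])
        elif step.op == "RESULT":
            env["RESULT"] = env[step.args["input"]]
    return env


def verify_output(answer, candidates):
    # Stage 3 (sketch): the output verifier cross-checks the executor's
    # answer against complementary answers (e.g. from an ensemble or a
    # caption-based model) and keeps the majority choice.
    return Counter([answer, *candidates]).most_common(1)[0][0]


if __name__ == "__main__":
    toy_image = {"dog": [(10, 20, 50, 60), (80, 15, 120, 70)]}  # fake boxes
    script = ss_parse(generate_script("How many dogs are in the image?"))
    answer = execute(script, toy_image)["RESULT"]
    print(verify_output(answer, candidates=[2, 2]))  # -> 2
```

The design point the sketch illustrates is the separation of concerns: script generation, script validation, execution, and output verification are independent stages, so an error caught by the SS-parser or the verifier never propagates silently to the final answer.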