Visualizing Thought: Conceptual Diagrams Enable Robust Combinatorial Planning in LMMs

Nasim Borazjanizadeh, Roei Herzig, Eduard Oks, Trevor Darrell, Rogerio Feris, Leonid Karlinsky

Published: 2025/3/14

Abstract

Human reasoning relies on constructing and manipulating mental models -- simplified internal representations of situations that we use to understand and solve problems. Conceptual diagrams (e.g., a sketch drawn by a human to aid reasoning) externalize these mental models, abstracting away irrelevant details to efficiently capture how entities interact with each other. In contrast, Large Language Models (LLMs) and Large Multimodal Models (LMMs) predominantly reason through text, limiting their effectiveness in complex multi-step tasks. In this paper, we propose Visual Thinking, a zero-shot framework that enables LMMs to reason through multiple chains of (self-generated) conceptual diagrams, significantly enhancing their combinatorial planning capabilities. Our approach does not require any human initialization beyond the natural language description of the task. It integrates both textual and diagrammatic reasoning within an optimized Graph-of-Thought inference framework, enhanced by beam search and depth-wise backtracking. Evaluated on multiple challenging PDDL planning domains, our method substantially improves LMMs' performance (e.g., GPT-4o: 35.5% -> 90.2% in Blocksworld) and consistently outperforms other text-only search-based inference methods. On more difficult planning domains with solution depths up to 40, our approach outperforms even the o1-preview reasoning model (e.g., a 16-percentage-point improvement in Floor Tiles). These results highlight the value of conceptual diagrams as a reasoning medium in LMMs.
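The abstract describes a Graph-of-Thought search over self-generated conceptual diagrams, driven by beam search with depth-wise backtracking. The Python sketch below illustrates only the general control flow of such a search under assumptions of our own; it is not the authors' implementation. The names `ThoughtNode`, `expand`, and `is_goal` are hypothetical placeholders for the LMM calls that would generate diagrams, score candidate steps, and verify plans.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThoughtNode:
    """One node in the reasoning graph: a textual thought plus a
    (self-generated) conceptual diagram of the current state. Hypothetical."""
    text: str
    diagram: Optional[str] = None   # e.g., a rendered sketch of the world state
    depth: int = 0
    score: float = 0.0
    parent: Optional["ThoughtNode"] = None

def expand(node: ThoughtNode) -> list[ThoughtNode]:
    """Placeholder: query an LMM for candidate next steps, each paired with a
    newly drawn conceptual diagram of the resulting state."""
    raise NotImplementedError

def is_goal(node: ThoughtNode) -> bool:
    """Placeholder: check the candidate plan against the task description."""
    raise NotImplementedError

def beam_search_with_backtracking(root: ThoughtNode,
                                  beam_width: int = 3,
                                  max_depth: int = 40) -> Optional[ThoughtNode]:
    """Generic beam search that keeps the beam at every depth so it can fall
    back one level when all current candidates are rejected (backtracking
    relies on the LMM sampling different expansions on a retry)."""
    beam = [root]
    history: list[list[ThoughtNode]] = []   # beams saved per depth
    depth = 0
    while depth < max_depth:
        candidates = [child for node in beam for child in expand(node)]
        for cand in candidates:
            if is_goal(cand):
                return cand
        if not candidates:
            # depth-wise backtracking: return to the previous depth's beam
            if not history:
                return None
            beam, depth = history.pop(), depth - 1
            continue
        history.append(beam)
        beam = sorted(candidates, key=lambda n: n.score, reverse=True)[:beam_width]
        depth += 1
    return None
```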