Difference-Guided Reasoning: A Temporal-Spatial Framework for Large Language Models

Hong Su

Published: 2025/9/25

Abstract

Large Language Models (LLMs) are important tools for reasoning and problem-solving, yet they often operate passively, answering questions without actively discovering new ones. This limitation reduces their ability to simulate human-like thinking, in which noticing differences is a key trigger for reasoning. In this paper, we therefore propose a difference-guided reasoning framework that enables LLMs to identify and act upon changes across time and space. The framework formalizes differences through feature extraction, prioritizes the most impactful and most recent changes, and links them to appropriate actions. We further extend the framework with mechanisms for abnormal behavior detection and for integrating external information from users or sensors, ensuring more reliable and grounded reasoning. Verification results show that prompting LLMs with differences improves focus on critical issues, leading to higher alignment with desired reasoning outcomes compared to direct prompting.
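To make the high-level pipeline sketched in the abstract concrete, the following minimal Python sketch illustrates one plausible reading of difference-guided prompting: extracting differences between two feature snapshots, ranking them by impact and recency, and prepending the top-ranked changes to the question posed to the LLM. All names, data structures, and the scoring rule here are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: hypothetical feature snapshots, impact weights,
# and scoring; not the framework's actual implementation.
from dataclasses import dataclass


@dataclass
class Difference:
    feature: str
    old: float
    new: float
    timestamp: int  # when the change was observed


def extract_differences(prev: dict, curr: dict, timestamp: int) -> list[Difference]:
    """Compare two feature snapshots and record features whose values changed."""
    return [
        Difference(name, prev[name], curr[name], timestamp)
        for name in curr
        if name in prev and curr[name] != prev[name]
    ]


def prioritize(diffs: list[Difference], impact: dict[str, float]) -> list[Difference]:
    """Rank differences so the most impactful and most recent changes come first."""
    return sorted(
        diffs,
        key=lambda d: (impact.get(d.feature, 0.0), d.timestamp),
        reverse=True,
    )


def build_prompt(question: str, diffs: list[Difference], top_k: int = 3) -> str:
    """Prepend the top-ranked differences to the question before querying the LLM."""
    lines = [
        f"- {d.feature} changed from {d.old} to {d.new} (t={d.timestamp})"
        for d in diffs[:top_k]
    ]
    return "Observed changes:\n" + "\n".join(lines) + f"\n\nQuestion: {question}"


if __name__ == "__main__":
    prev = {"temperature": 21.0, "pressure": 1.0, "vibration": 0.1}
    curr = {"temperature": 35.0, "pressure": 1.0, "vibration": 0.4}
    impact = {"temperature": 0.9, "vibration": 0.7, "pressure": 0.2}

    ranked = prioritize(extract_differences(prev, curr, timestamp=2), impact)
    print(build_prompt("Is the machine behaving abnormally?", ranked))
```

The prompt produced this way highlights only the changes judged most relevant, which reflects the abstract's claim that difference-based prompting helps the LLM focus on critical issues rather than answering the raw question directly.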