Blind Spot Navigation in Large Language Model Reasoning with Thought Space Explorer
Jinghan Zhang, Fengran Mo, Tharindu Cyril Weerasooriya, Yeyang Zhou, Xinyue Ye, Dongjie Wang, Yanjie Fu, Kunpeng Liu
Published: 2024/10/31
Abstract
Recent advances in large language models (LLMs) have demonstrated their potential for handling complex reasoning tasks, typically by constructing a chain of thought that guides the model through multi-step reasoning. However, existing methods often remain confined to previously explored regions of the solution space and thus overlook critical blind spots within the LLM's cognitive range. To address this issue, we introduce the "Thought Space Explorer" (TSE), a novel framework that expands and optimizes thought structures to guide LLMs toward the blind spots in their thinking. By generating new reasoning steps and branches from the original thought structure using several designed strategies, TSE broadens the exploration of the thought space and mitigates the impact of blind spots on LLM reasoning. Experimental results on reasoning tasks of multiple difficulty levels demonstrate the efficacy of TSE, which surpasses various baseline methods. We also conduct extensive analysis of how structured and expansive thought exploration helps unleash the reasoning potential of LLMs.
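The abstract describes TSE only at a high level: new reasoning steps and branches are generated from an existing thought structure so that previously unexplored regions are covered. As a loose illustration of that idea, the following is a minimal sketch, not the paper's actual algorithm; all names here (ThoughtNode, expand_blind_spots, propose) are hypothetical, and the LLM call is abstracted behind a callback.

```python
# Hypothetical sketch of blind-spot expansion over a thought structure.
# `propose(step, explored)` is assumed to query an LLM for a continuation
# of `step` that differs from every continuation listed in `explored`,
# steering generation toward unexplored ("blind spot") branches.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ThoughtNode:
    text: str
    children: List["ThoughtNode"] = field(default_factory=list)

def expand_blind_spots(
    root: ThoughtNode,
    propose: Callable[[str, List[str]], str],
    branches_per_node: int = 2,
    max_depth: int = 2,
) -> None:
    """Attach new, deliberately divergent branches to an existing thought tree."""
    frontier = [(root, 0)]
    while frontier:
        node, depth = frontier.pop()
        if depth >= max_depth:
            continue
        explored = [c.text for c in node.children]
        for _ in range(branches_per_node):
            new_text = propose(node.text, explored)
            node.children.append(ThoughtNode(new_text))
            explored.append(new_text)  # later proposals must avoid this one too
        frontier.extend((c, depth + 1) for c in node.children)

# Toy usage with a stub in place of a real LLM call.
if __name__ == "__main__":
    counter = [0]
    def stub_propose(step: str, explored: List[str]) -> str:
        counter[0] += 1
        return f"{step} -> alternative #{counter[0]}"

    root = ThoughtNode("initial reasoning step")
    expand_blind_spots(root, stub_propose)
    print(len(root.children), "new branches at the root")
```

In a real system the `propose` callback would carry the divergence constraint in its prompt; the tree traversal itself is just ordinary frontier expansion.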