Analysis of Error Sources in LLM-based Hypothesis Search for Few-Shot Rule Induction
Aishni Parab, Hongjing Lu, Ying Nian Wu, Sumit Gulwani
Published: August 31, 2025
Abstract
Inductive reasoning enables humans to infer abstract rules from limited examples and apply them to novel situations. In this work, we compare an LLM-based hypothesis search framework with direct program generation approaches on few-shot rule induction tasks. Our findings show that hypothesis search achieves performance comparable to that of humans, while direct program generation falls notably behind. An error analysis reveals key bottlenecks in hypothesis generation and suggests directions for advancing program induction methods. Overall, this paper underscores both the potential of LLM-based hypothesis search for modeling inductive reasoning and the challenges that remain in building more efficient systems.