Identifying and Answering Questions with False Assumptions: An Interpretable Approach

Zijie Wang, Eduardo Blanco

Published: August 21, 2025

Abstract

People often ask questions with false assumptions, a type of question that has no direct answer. Answering such questions requires first identifying the false assumptions. Large Language Models (LLMs) often generate misleading answers to these questions due to hallucination. In this paper, we focus on identifying and answering questions with false assumptions across several domains. We first investigate whether the problem reduces to fact verification. Then, we present an approach that leverages external evidence to mitigate hallucinations. Experiments with five LLMs demonstrate that (1) incorporating retrieved evidence is beneficial and (2) generating and validating atomic assumptions yields larger improvements and provides an interpretable answer by pinpointing the false assumptions.
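The generate-and-validate idea described above can be sketched as a small pipeline: decompose a question into atomic assumptions, check each against retrieved evidence, and report the unsupported ones as the answer. The sketch below is illustrative only; the function names, the toy question, and the keyword-overlap "verifier" are assumptions for this example, not the authors' implementation (which uses LLMs for both steps).

```python
# Hedged sketch of the generate-and-validate pipeline from the abstract.
# All helpers are illustrative stand-ins, not the paper's actual method.

def generate_assumptions(question):
    """Decompose a question into atomic assumptions.
    The paper prompts an LLM for this; here one example is hardcoded."""
    if question == "Why did Einstein win the Nobel Prize for relativity?":
        return [
            "Einstein won a Nobel Prize.",
            "Einstein won the Nobel Prize for relativity.",
        ]
    return []

def is_supported(assumption, evidence):
    """Toy verifier: an assumption counts as supported if some evidence
    sentence contains all of its content words. A real system would use
    an LLM or a trained fact-verification model instead."""
    words = {w.strip(".?").lower() for w in assumption.split()}
    content = words - {"a", "an", "the", "for", "won"}
    return any(
        content <= {w.strip(".?").lower() for w in sent.split()}
        for sent in evidence
    )

def answer(question, evidence):
    """Return the false assumptions (the interpretable answer), or None
    if every assumption is supported, i.e., the question is answerable."""
    false = [a for a in generate_assumptions(question)
             if not is_supported(a, evidence)]
    return false or None

evidence = [
    "Einstein won the Nobel Prize in 1921.",
    "The prize recognized his discovery of the photoelectric effect.",
]
print(answer("Why did Einstein win the Nobel Prize for relativity?",
             evidence))
# → ['Einstein won the Nobel Prize for relativity.']
```

Pinpointing the unsupported atomic assumption, rather than emitting a free-form answer, is what makes the output interpretable: the user sees exactly which premise of their question failed verification.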
