ARK-V1: An LLM-Agent for Knowledge Graph Question Answering Requiring Commonsense Reasoning

Jan-Felix Klein, Lars Ohnemus

Published: 2025/9/22

Abstract

Large Language Models (LLMs) show strong reasoning abilities but rely on internalized knowledge that is often insufficient, outdated, or incorrect when a question requires specific domain knowledge. Knowledge Graphs (KGs) provide structured external knowledge, yet their complexity and multi-hop reasoning requirements make integration challenging. We present ARK-V1, a simple KG-agent that iteratively explores a graph to answer natural language queries. We evaluate several non-fine-tuned, state-of-the-art LLMs as backbones for ARK-V1 on the CoLoTa dataset, which requires both KG-based and commonsense reasoning over long-tail entities. ARK-V1 achieves substantially higher conditional accuracy than Chain-of-Thought baselines, and larger backbone models show a clear trend toward better coverage, correctness, and stability.
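The iterative graph exploration the abstract describes can be sketched as a simple loop: starting from a topic entity, the agent repeatedly inspects the outgoing edges, asks a policy which relation to follow, and hops to the next entity. The code below is a minimal illustrative sketch, not the paper's implementation: the toy knowledge graph, the entity names, and the keyword-overlap heuristic standing in for the LLM policy are all assumptions for demonstration.

```python
# Hypothetical sketch of an iterative KG-exploration loop in the spirit of a
# KG-agent like ARK-V1. All names and the edge-selection heuristic are
# illustrative assumptions; a real agent would let an LLM pick the relation
# and decide when the path suffices to answer the question.

# Toy knowledge graph as (head, relation, tail) triples.
KG = [
    ("Ada Lovelace", "field_of_work", "Mathematics"),
    ("Ada Lovelace", "notable_work", "Analytical Engine notes"),
    ("Analytical Engine notes", "describes", "Analytical Engine"),
    ("Analytical Engine", "designed_by", "Charles Babbage"),
]

def neighbors(entity):
    """Return outgoing (relation, tail) edges for an entity."""
    return [(r, t) for h, r, t in KG if h == entity]

def choose_edge(question, edges):
    """Stand-in for the LLM policy: pick the edge whose relation name
    shares the most tokens with the question."""
    q_tokens = set(question.lower().replace("?", "").split())
    return max(edges, key=lambda e: len(set(e[0].split("_")) & q_tokens))

def explore(question, start_entity, max_hops=3):
    """Iteratively walk the KG from the topic entity, one hop per step."""
    entity = start_entity
    for _ in range(max_hops):
        edges = neighbors(entity)
        if not edges:  # dead end: no outgoing edges left to explore
            break
        _, entity = choose_edge(question, edges)
    return entity

answer = explore("Who designed the machine described in her notable work?",
                 "Ada Lovelace")
print(answer)  # → Charles Babbage
```

The multi-hop requirement mentioned in the abstract shows up directly here: the answer entity is three edges away from the topic entity, so no single retrieval step could reach it.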
