Language Models Do Not Have Human-Like Working Memory
Jen-tse Huang, Kaiser Sun, Wenxuan Wang, Mark Dredze
Published: 2025/4/30
Abstract
While Large Language Models (LLMs) exhibit remarkable reasoning abilities, we demonstrate that they fundamentally lack a core aspect of human cognition: working memory. Human working memory is an active cognitive system that enables not only the temporary storage of information but also its processing and utilization. Without working memory, individuals may produce unrealistic conversations, contradict themselves, and struggle with tasks that require mental reasoning. Existing evaluations based on N-back or context-dependent tasks fall short because they allow LLMs to exploit accessible context rather than retain latent information. We introduce three novel tasks, (1) Number Guessing, (2) Yes-No Deduction, and (3) Math Magic, that isolate internal representations from external context. Across seventeen frontier models spanning four major model families, we consistently observe irrational or contradictory behaviors, highlighting LLMs' inability to retain and manipulate latent information. Our work establishes a new benchmark for evaluating working memory in LLMs and identifies this deficit as a critical obstacle to artificial general intelligence. Code and prompts for the experiments are available at https://github.com/penguinnnnn/LLM-Working-Memory.
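As an illustrative sketch only (the prompt wording, model name, and consistency check below are assumptions, not the paper's exact protocol; see the linked repository for the actual code and prompts), a Number-Guessing-style probe can be run by asking a model to commit silently to a number and then checking whether its later yes/no answers remain mutually consistent:

```python
# Minimal sketch of a Number-Guessing-style probe.
# Assumptions: prompt wording, model name, and the consistency check are
# illustrative, not taken from the paper's released prompts.
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"      # placeholder model name

def ask(messages):
    """Send the chat history and return the assistant's reply text."""
    resp = client.chat.completions.create(
        model=MODEL, messages=messages, temperature=0
    )
    return resp.choices[0].message.content.strip()

# 1) Ask the model to silently commit to a number without revealing it.
history = [
    {"role": "user", "content": ("Think of an integer between 1 and 10. "
                                 "Do not reveal it; just reply 'Ready.'")},
]
history.append({"role": "assistant", "content": ask(history)})

# 2) Probe the hidden number with yes/no questions. A system with working
#    memory should answer consistently with a single committed value.
answers = {}
for q in ["Is your number greater than 5?",
          "Is your number even?",
          "Is your number exactly 7?"]:
    history.append({"role": "user", "content": q + " Answer only Yes or No."})
    reply = ask(history)
    history.append({"role": "assistant", "content": reply})
    answers[q] = reply

print(answers)
# Simple illustrative contradiction check: "exactly 7" = Yes is incompatible
# with "even" = Yes, since 7 is odd.
if "yes" in answers["Is your number exactly 7?"].lower():
    assert "no" in answers["Is your number even?"].lower(), "Contradictory answers"
```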