A Markov Categorical Framework for Language Modeling

Yifan Zhang

Published: 2025/7/25

Abstract

Autoregressive language models achieve remarkable performance, yet a unified theory explaining their internal mechanisms--how training shapes their representations and enables complex behaviors--remains elusive. We introduce a new analytical framework that models the single-step generation process as a composition of information-processing stages using the language of Markov categories. This compositional perspective provides a unified mathematical language to connect three critical aspects of language modeling that are typically studied in isolation: the training objective, the geometry of the learned representation space, and practical model capabilities. First, our framework provides a precise information-theoretic rationale for the success of multi-token prediction methods like speculative decoding, quantifying the "information surplus" a model's hidden state contains about tokens beyond the immediate next one. Second, we clarify how the standard negative log-likelihood (NLL) objective compels the model to learn not just the next word, but also the data's intrinsic conditional uncertainty, a process we formalize using categorical entropy. Our central result reveals that NLL training functions as an implicit form of spectral contrastive learning. We prove that, for common model architectures, this simple predictive objective forces the model to sculpt a geometrically structured representation space, implicitly aligning representations with the eigenspectrum of a "predictive similarity" operator. This work offers a powerful new lens to understand how information flows through a model and how the training objective shapes its internal geometry, thereby bridging the gap between learning theory and the practical success of large language models.
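To make the second claim concrete, here is the standard cross-entropy decomposition the abstract alludes to; the notation below is generic and not necessarily the paper's own. For a context $x$ drawn from the data distribution and a model $q_\theta$, the expected next-token NLL splits as

\[
\mathbb{E}_{x}\big[\, H\big(p_{\mathrm{data}}(\cdot \mid x),\, q_\theta(\cdot \mid x)\big) \,\big]
\;=\;
\mathbb{E}_{x}\big[\, H\big(p_{\mathrm{data}}(\cdot \mid x)\big) \,\big]
\;+\;
\mathbb{E}_{x}\big[\, D_{\mathrm{KL}}\big(p_{\mathrm{data}}(\cdot \mid x) \,\big\|\, q_\theta(\cdot \mid x)\big) \,\big].
\]

Minimizing the NLL can only reduce the KL term; the first term, the data's intrinsic conditional entropy, is irreducible. In this sense the objective compels the model to match not just the most likely next token but the full conditional uncertainty of the data, which is the quantity the paper formalizes with categorical entropy.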
