Interpreting Transformer Architectures as Implicit Multinomial Regression

Jonas A. Actor, Anthony Gruber, Eric C. Cyr

Published: 2025/9/4

Abstract

Mechanistic interpretability aims to understand how internal components of modern machine learning models, such as weights, activations, and layers, give rise to the model's overall behavior. One particularly opaque mechanism is attention: despite its central role in transformer models, its mathematical underpinnings and relationship to concepts like feature polysemanticity, superposition, and model performance remain poorly understood. This paper establishes a novel connection between attention mechanisms and multinomial regression. Specifically, we show that in a fixed multinomial regression setting, optimizing over latent features yields optimal solutions that align with the dynamics induced by attention blocks. In other words, the evolution of representations through a transformer can be interpreted as a trajectory that recovers the optimal features for classification.
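As a minimal sketch of the stated setting (the notation below, $W$, $z_i$, $y_i$, is assumed for illustration and not taken from the paper), consider multinomial (softmax) regression with a fixed classifier $W = [w_1, \dots, w_K]$ applied to latent features $z_i \in \mathbb{R}^d$ with labels $y_i$:

$$\mathcal{L}(z_1, \dots, z_n) \;=\; -\sum_{i=1}^{n} \log \frac{\exp\!\left(w_{y_i}^{\top} z_i\right)}{\sum_{k=1}^{K} \exp\!\left(w_{k}^{\top} z_i\right)}.$$

Optimizing over the features themselves, rather than the weights, by gradient flow gives

$$\dot{z}_i \;=\; -\nabla_{z_i}\mathcal{L} \;=\; w_{y_i} - \sum_{k=1}^{K} \operatorname{softmax}\!\left(W^{\top} z_i\right)_k\, w_k,$$

so each feature moves along a softmax-weighted combination of vectors, the same structural form as an attention-style update; the paper's contribution is to make this correspondence with transformer attention blocks precise.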
