Graded Transformers

Tony Shaska Sr

Published: 2025/7/27

Abstract

We introduce the Graded Transformer framework, a new class of sequence models that embeds algebraic inductive biases through grading transformations on vector spaces. Extending Graded Neural Networks (GNNs), we propose two architectures: the Linearly Graded Transformer (LGT) and the Exponentially Graded Transformer (EGT). These models apply parameterized scaling operators, governed by fixed or learnable grading tuples and, in the case of the EGT, by exponential factors, to encode hierarchical structure in the attention and representation layers and to improve efficiency on structured data. We establish rigorous guarantees, including universal approximation theorems for continuous and Sobolev functions, reduced sample complexity via effective VC dimension bounds, Lipschitz continuity of graded operations, and robustness to perturbations. A graded loss ensures gradient stability and alignment with domain priors during optimization. By treating grades as differentiable parameters, the framework enables adaptive feature prioritization, overcoming the limitations of fixed grades in earlier models. The Graded Transformer provides a mathematically principled approach to hierarchical learning and neuro-symbolic reasoning. Applications include algebraic geometry (moduli spaces and zeta functions), physics (multiscale systems), natural language processing (syntactic parsing), biological sequence analysis (variant prediction), robotics and autonomous systems (safety-critical prioritization), the automotive industry (certifiable AI for ADAS), and blockchain and financial cryptography (secure coding and structured prediction).
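To illustrate the idea of a grading operator acting on token representations before attention, the following is a minimal PyTorch sketch of one plausible reading of the abstract: each feature coordinate is scaled by a grade w_i (LGT-style) or by exp(alpha * w_i) (EGT-style), with the grading tuple either fixed or learnable. The class names GradedScaling and GradedTransformerBlock, the linspace initialization of the grades, and the parameter alpha are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of graded scaling composed with a standard transformer block.
import torch
import torch.nn as nn


class GradedScaling(nn.Module):
    """Scale feature coordinate i of every token by a grade-derived factor.

    LGT-style: factor_i = w_i, a fixed or learnable grading tuple.
    EGT-style: factor_i = exp(alpha * w_i), amplifying high-grade coordinates.
    """

    def __init__(self, d_model: int, mode: str = "linear", learnable: bool = True):
        super().__init__()
        grades = torch.linspace(1.0, 2.0, d_model)  # assumed grading tuple
        if learnable:
            self.grades = nn.Parameter(grades)       # grades as differentiable parameters
        else:
            self.register_buffer("grades", grades)   # fixed grades
        self.mode = mode
        self.alpha = nn.Parameter(torch.tensor(0.1)) if mode == "exp" else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); grades broadcast over batch and sequence.
        if self.mode == "exp":
            return x * torch.exp(self.alpha * self.grades)
        return x * self.grades


class GradedTransformerBlock(nn.Module):
    """A standard encoder layer preceded by the graded scaling operator."""

    def __init__(self, d_model: int = 64, nhead: int = 4, mode: str = "linear"):
        super().__init__()
        self.grading = GradedScaling(d_model, mode=mode)
        self.encoder = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(self.grading(x))


# Usage: apply an EGT-style block to a toy batch of sequences.
x = torch.randn(2, 10, 64)                 # (batch, seq_len, d_model)
block = GradedTransformerBlock(mode="exp")
y = block(x)
print(y.shape)                             # torch.Size([2, 10, 64])
```

In this sketch the grading acts as a diagonal reweighting of the embedding space, so higher-graded coordinates contribute more strongly to the attention scores computed downstream; making the grades learnable corresponds to the adaptive feature prioritization mentioned in the abstract.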
