Dynamical Alignment: A Principle for Adaptive Neural Computation
Xia Chen
Published: August 13, 2025
Abstract
The computational capabilities of a neural network are widely assumed to be determined by its static architecture. Here we challenge this view by establishing that a fixed neural structure can operate in fundamentally different computational modes, driven not by changes to that structure but by the temporal dynamics of its input signals. We term this principle 'Dynamical Alignment'. Applying it offers a novel resolution to the long-standing paradox of why brain-inspired spiking neural networks (SNNs) underperform their artificial counterparts. By encoding static inputs into controllable dynamical trajectories, we uncover a bimodal optimization landscape with a critical phase transition governed by phase-space volume dynamics. A 'dissipative' mode, driven by contracting dynamics, achieves superior energy efficiency through sparse temporal codes. In contrast, an 'expansive' mode, driven by expanding dynamics, unlocks the representational power required for SNNs to match or even exceed their artificial neural network counterparts on diverse tasks, including classification, reinforcement learning, and cognitive integration. We find that this computational advantage emerges from a timescale alignment between input dynamics and neuronal integration. The principle, in turn, offers a unified, computable perspective on long-observed dualities in neuroscience, from the stability-plasticity dilemma to segregation-integration dynamics. It demonstrates that computation in both biological and artificial systems can be dynamically sculpted by 'software' running on fixed 'hardware', pointing toward a potential paradigm shift for AI research: away from designing complex static architectures and toward mastering adaptive, dynamic principles of computation.
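
To make the two modes concrete, the minimal Python sketch below (our illustration, not the paper's implementation; every function name and parameter value is an assumption) unfolds one static input through a linear flow dz/dt = Az + x and feeds the resulting trajectory to leaky integrate-and-fire neurons. For such a flow the phase-space volume evolves as V(t) = V(0) * exp(tr(A) * t), so the sign of tr(A) selects a contracting ('dissipative') or expanding ('expansive') drive:

import numpy as np

def unfold_input(x, A, T=1000, dt=1e-3):
    # Unfold a static input x into a trajectory of the linear flow dz/dt = A z + x.
    # Phase-space volume evolves as V(t) = V(0) * exp(tr(A) * t): tr(A) < 0 gives
    # a contracting ('dissipative') drive, tr(A) > 0 an expanding ('expansive') one.
    z = np.zeros_like(x)
    traj = np.empty((T, x.size))
    for t in range(T):
        z = z + dt * (A @ z + x)           # forward-Euler step of the flow
        traj[t] = z
    return traj

def lif_spikes(traj, tau_m=20e-3, dt=1e-3, v_th=1.0):
    # Leaky integrate-and-fire readout; the membrane timescale tau_m is what the
    # input dynamics must align with.
    v = np.zeros(traj.shape[1])
    spikes = np.zeros_like(traj)
    for t, inp in enumerate(traj):
        v = v + (dt / tau_m) * (-v + inp)  # leaky integration toward the input
        fired = v >= v_th
        spikes[t, fired] = 1.0
        v[fired] = 0.0                     # reset to rest after a spike
    return spikes

rng = np.random.default_rng(0)
n = 8
x = rng.normal(scale=3.0, size=n)          # one static input pattern
W = rng.normal(scale=0.3, size=(n, n))
W -= np.trace(W) / n * np.eye(n)           # zero-trace coupling, then shift it
for label, shift in [("dissipative", -4.0), ("expansive", +1.0)]:
    A = W + shift * np.eye(n)              # tr(A) = n * shift
    rate = lif_spikes(unfold_input(x, A)).mean()
    print(f"{label:11s}  tr(A) = {np.trace(A):+6.1f}  spike rate = {rate:.3f}")

In this toy setting the contracting drive settles onto a bounded fixed point and spikes sparsely, while the expanding drive pushes the trajectory across the firing threshold and yields denser spiking; the alignment knob is the ratio between the flow's timescale, set by the eigenvalues of A, and the membrane timescale tau_m.
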