Spiking Neural Networks: A Theoretical Framework for Universal Approximation and Training
Umberto Biccari
Published: 2025/9/26
Abstract
Spiking Neural Networks (SNNs) are widely regarded as a biologically inspired and energy-efficient alternative to classical artificial neural networks. Yet their theoretical foundations remain only partially understood. In this work, we develop a rigorous mathematical analysis of a representative SNN architecture based on Leaky Integrate-and-Fire (LIF) neurons with threshold-reset dynamics. Our contributions are twofold. First, we establish a universal approximation theorem showing that SNNs can approximate continuous functions on compact domains to arbitrary accuracy. The proof relies on a constructive encoding of target values via spike timing and a careful interplay between idealized $\delta$-driven dynamics and smooth Gaussian-regularized models. Second, we analyze the quantitative behavior of spike times across layers, proving well-posedness of the hybrid dynamics and deriving conditions under which spike counts remain stable, decrease, or, in exceptional cases, increase due to resonance phenomena or overlapping inputs. Together, these results provide a principled foundation for understanding both the expressive power and the dynamical constraints of SNNs, offering theoretical guarantees for their use in classification and signal processing tasks.
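To fix ideas, the following is a minimal sketch of the standard LIF threshold-reset dynamics referenced in the abstract, i.e. $\tau \dot{v}(t) = -v(t) + I(t)$ with a hard reset $v \mapsto v_{\mathrm{reset}}$ whenever $v$ crosses a threshold $\vartheta$. This is not the paper's exact construction; the integration scheme and all parameter values (`tau`, `theta`, `v_reset`, `dt`) are illustrative assumptions.

```python
import numpy as np

def simulate_lif(I, dt=1e-3, tau=20e-3, theta=1.0, v_reset=0.0):
    """Forward-Euler simulation of a single LIF neuron with threshold-reset.

    Returns the membrane-potential trace and the recorded spike times.
    All parameters are illustrative, not taken from the paper.
    """
    v = v_reset
    trace, spike_times = [], []
    for k, i_k in enumerate(I):
        v += dt / tau * (-v + i_k)        # leaky integration step
        if v >= theta:                    # threshold crossing
            spike_times.append(k * dt)    # information is carried by the timing
            v = v_reset                   # hard reset after the spike
        trace.append(v)
    return np.array(trace), spike_times

# Example: a constant supra-threshold input yields a regular spike train.
trace, spikes = simulate_lif(I=np.full(1000, 1.5))
print(f"{len(spikes)} spikes, first at t = {spikes[0]:.3f} s")
```

Note that the output of interest is the list of spike times rather than the membrane values themselves, consistent with the spike-timing encoding of target values used in the universal approximation argument.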