Nyström-Accelerated Primal LS-SVMs: Breaking the $O(an^3)$ Complexity Bottleneck for Scalable ODEs Learning

Weikuo Wang, Yue Liao, Huan Luo

Published: 2025/10/5

Abstract

A major limitation of kernel-based methods (e.g., least squares support vector machines, LS-SVMs) for solving linear and nonlinear ordinary differential equations (ODEs) is their prohibitive $O(an^3)$ computational complexity ($a=1$ for linear ODEs, $a=27$ for nonlinear ODEs), which grows cubically with the number of temporal discretization points $n$. We propose a novel Nyström-accelerated LS-SVM framework that breaks this bottleneck by reformulating ODEs as primal-space constraints. Specifically, we derive for the first time an explicit Nyström-based feature mapping, together with its derivatives, from one-dimensional temporal discretization points to a higher $m$-dimensional feature space ($1 < m \le n$), enabling the learning process to solve linear or nonlinear equation systems whose complexity depends on $m$ rather than $n$. Numerical experiments on sixteen benchmark ODEs demonstrate: 1) computation $10$–$6000$ times faster than classical LS-SVMs and physics-informed neural networks (PINNs); 2) accuracy comparable to LS-SVMs ($<0.13\%$ relative difference in MAE, RMSE, and $\left\| y-\hat{y} \right\|_{\infty}$) while surpassing PINNs by up to 72\% in RMSE; and 3) scalability to $n=10^4$ time steps with only $m=50$ features. This work establishes a new paradigm for efficient kernel-based ODE learning without significantly sacrificing solution accuracy.
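To make the core idea concrete, below is a minimal sketch (not the authors' code) of an explicit Nyström feature map and its analytic time derivative for an RBF kernel, which is what a primal LS-SVM would need to impose an ODE such as $y'(t) = f(t, y)$ as a constraint in the $m$-dimensional feature space. The function name `nystrom_features`, the choice of RBF kernel, and the use of evenly spaced landmark points are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (assumptions: RBF kernel, evenly spaced landmarks) of the
# explicit Nystrom feature map phi(t) and its derivative dphi/dt.
import numpy as np

def nystrom_features(t, landmarks, gamma):
    """Map 1-D time points t (n,) to an m-dimensional Nystrom feature space.

    phi(t) = Lambda^{-1/2} U^T k(Z, t), where K(Z, Z) = U Lambda U^T is the
    eigendecomposition of the kernel matrix on the m landmark points Z.
    Returns phi (n, m) and its analytic derivative dphi/dt (n, m); the
    derivative is what lets a primal LS-SVM enforce the ODE residual.
    """
    Z = landmarks[:, None]                      # (m, 1)
    K_zz = np.exp(-gamma * (Z - Z.T) ** 2)      # (m, m) kernel on landmarks
    evals, U = np.linalg.eigh(K_zz)
    evals = np.clip(evals, 1e-12, None)         # guard tiny eigenvalues
    W = U / np.sqrt(evals)                      # columns of U scaled by Lambda^{-1/2}

    D = t[:, None] - landmarks[None, :]         # (n, m) pairwise differences
    K_tz = np.exp(-gamma * D ** 2)              # k(t, Z)
    dK_tz = -2.0 * gamma * D * K_tz             # d/dt of the RBF kernel

    return K_tz @ W, dK_tz @ W                  # phi(t), phi'(t), each (n, m)


# Example at the scale reported in the abstract: n = 10^4 steps, m = 50 features.
t = np.linspace(0.0, 10.0, 10_000)
landmarks = np.linspace(0.0, 10.0, 50)
phi, dphi = nystrom_features(t, landmarks, gamma=1.0)
print(phi.shape, dphi.shape)                    # (10000, 50) (10000, 50)
```

With such a map in hand, the unknown solution can be modeled as $\hat{y}(t) = w^\top \phi(t) + b$, so the resulting linear (or nonlinear) system is of size governed by $m$, consistent with the $m$-dependent complexity claimed above.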
