Error analysis for learning the time-stepping operator of evolutionary PDEs

Ke Chen, Meenakshi Krishnan, Haizhao Yang

Published: 2025/9/4

Abstract

Deep neural networks (DNNs) have recently emerged as effective tools for approximating solution operators of partial differential equations (PDEs), including evolutionary problems. Classical numerical solvers for such PDEs must often balance stability constraints against the high computational cost of iterative solvers. In contrast, DNNs offer a data-driven alternative: by learning the time-stepping operator directly, they can sidestep this trade-off. In this work, we provide a rigorous theoretical framework for analyzing the approximation of these operators by feedforward neural networks (FNNs). We derive explicit error estimates that characterize how the approximation error depends on the network architecture -- namely, its width and depth -- as well as on the number of training samples. Furthermore, we establish Lipschitz continuity properties of the time-stepping operators associated with classical numerical schemes and identify low-complexity structures inherent in these operators for several classes of PDEs, including reaction-diffusion equations, parabolic equations with external forcing, and scalar conservation laws. Leveraging these structural insights, we obtain generalization bounds demonstrating efficient learnability without incurring the curse of dimensionality. Finally, we extend our analysis from single-input operator learning to a general multi-input setting, broadening the applicability of our results.
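To make the setting concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of what learning a time-stepping operator means in practice: training pairs (u(t), u(t + Δt)) are generated by a classical explicit scheme for a 1D reaction-diffusion equation u_t = ν u_xx + u − u³, and an FNN is fit to the one-step map on the spatial grid. All hyperparameters, network sizes, and the choice of PDE are illustrative assumptions.

```python
# Illustrative sketch (hypothetical, not the paper's code): learn the one-step
# time-stepping operator u(t) -> u(t + dt) of a 1D reaction-diffusion equation
# from data generated by an explicit finite-difference scheme.
import numpy as np
import torch
import torch.nn as nn

nx, nu, dt = 64, 0.01, 1e-3
dx = 1.0 / nx

def step(u):
    """One explicit Euler step of u_t = nu * u_xx + u - u^3 (periodic BCs)."""
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (nu * lap + u - u**3)

# Training pairs (u, S_dt(u)) sampled from random smooth initial states.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, nx, endpoint=False)
inputs = np.stack([
    sum(rng.normal() / (k + 1) * np.sin(2 * np.pi * k * x) for k in range(1, 6))
    for _ in range(2000)
])
targets = np.stack([step(u) for u in inputs])

X = torch.tensor(inputs, dtype=torch.float32)
Y = torch.tensor(targets, dtype=torch.float32)

# Feedforward network approximating the time-stepping operator on the grid;
# width/depth are the architectural quantities the error estimates depend on.
model = nn.Sequential(
    nn.Linear(nx, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, nx),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()

# Once trained, the surrogate can be rolled out autoregressively in time.
u = X[:1]
with torch.no_grad():
    for _ in range(100):
        u = model(u)
```

Rolling the learned map out autoregressively, as in the last few lines, is where the Lipschitz continuity of the operator matters: it controls how one-step approximation errors accumulate over many steps.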