A Variational Framework for Residual-Based Adaptivity in Neural PDE Solvers and Operator Learning

Juan Diego Toscano, Daniel T. Chen, Vivek Oommen, George Em Karniadakis

Published: 2025/9/17

Abstract

Residual-based adaptive strategies are widely used in scientific machine learning but remain largely heuristic. We introduce a unifying variational framework that formalizes these methods by integrating convex transformations of the residual. Different transformations correspond to distinct objective functionals: exponential weights target the minimization of uniform error, while linear weights recover the minimization of quadratic error. Within this perspective, adaptive weighting is equivalent to selecting sampling distributions that optimize the primal objective, thereby linking discretization choices directly to error metrics. This principled approach yields three benefits: (1) it enables systematic design of adaptive schemes across norms, (2) reduces discretization error through variance reduction of the loss estimator, and (3) enhances learning dynamics by improving the gradient signal-to-noise ratio. Extending the framework to operator learning, we demonstrate substantial performance gains across optimizers and architectures. Our results provide a theoretical justification of residual-based adaptivity and establish a foundation for principled discretization and training strategies.
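As an illustration of the idea in the abstract, the sketch below shows how different convex transforms of the residual induce different normalized weighting (sampling) distributions over collocation points: a linear transform of the squared residual corresponds to the quadratic-error objective, while an exponential transform concentrates weight on the worst residuals, targeting the uniform error. This is a minimal, hypothetical example in NumPy, not the paper's implementation; the function name, the `beta` temperature parameter, and the specific normalization are assumptions for illustration.

```python
import numpy as np

def adaptive_weights(residuals, transform="linear", beta=1.0):
    # Illustrative residual-based weights (assumed API, not the paper's code).
    # Convex transforms of the squared residual define a normalized
    # weighting/sampling distribution over collocation points:
    #   "linear":      w_i ∝ r_i^2            (quadratic / L2-type error)
    #   "exponential": w_i ∝ exp(beta * r_i^2) (emphasizes the largest
    #                  residuals, targeting the uniform / L-infinity error)
    r2 = np.asarray(residuals, dtype=float) ** 2
    if transform == "linear":
        g = r2
    elif transform == "exponential":
        g = np.exp(beta * (r2 - r2.max()))  # subtract max for numerical stability
    else:
        raise ValueError(f"unknown transform: {transform!r}")
    return g / g.sum()

# Example: three collocation points with one large residual.
residuals = np.array([0.1, 0.5, 2.0])
w_lin = adaptive_weights(residuals, "linear")
w_exp = adaptive_weights(residuals, "exponential", beta=2.0)
```

Both distributions sum to one and can be used either as importance weights in the loss estimator or as a sampling density for collocation points; the exponential transform places strictly more mass on the worst-residual point than the linear one, reflecting its uniform-error objective.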
