Were Residual Penalty and Neural Operators All We Needed for Solving Optimal Control Problems?

Oliver G. S. Lundqvist, Fabricio Oliveira

Published: 2025/6/5

Abstract

Neural networks have been used to solve optimal control problems, typically by training them with a combined loss function that accounts for data, differential-equation residuals, and objective costs. We show that including the cost function in the training process is unnecessary, and we advocate for a simpler architecture and streamlined approach that decouples the optimal control problem from the training process. Our work thus shows that a simple neural operator architecture, such as DeepONet, coupled with an unconstrained optimization routine, can solve multiple optimal control problems with a single physics-informed training phase followed by an optimization phase. We achieve this by adding a penalty term based on the differential-equation residual to the cost function and computing gradients with respect to the control via automatic differentiation through the trained neural operator within an iterative optimization routine. Our results show accuracy acceptable for practical applications and potential computational savings for more complex, higher-dimensional problems.
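The decoupled scheme the abstract describes can be sketched in a toy form: first obtain a (fixed) solution operator mapping controls to states, then minimize the objective cost plus a residual penalty over the control by gradient descent. In the minimal sketch below, every problem detail is an illustrative assumption, not the paper's setup: the scalar dynamics x'(t) = -x(t) + u(t) with x(0) = 0, the constant target state 0.5, the penalty weight, and above all the linear explicit-Euler solution map `A`, which stands in for a trained neural operator such as DeepONet (it is deliberately inconsistent with the backward-difference residual, so the penalty is active, mimicking an approximate learned surrogate). Because the surrogate is linear, the gradient has a closed form; with a real neural operator it would instead come from automatic differentiation through the network.

```python
import numpy as np

# Illustrative toy problem (assumed, not from the paper):
#   dynamics x'(t) = -x(t) + u(t), x(0) = 0, on t in [0, T].
N, T = 50, 1.0
h = T / N

# Fixed surrogate solution operator G(u) = A @ u (explicit-Euler solution
# map), standing in for a trained neural operator. It is deliberately
# inexact with respect to the residual below, so the penalty is nonzero.
A = np.zeros((N, N))
for k in range(1, N):
    A[k] = (1.0 - h) * A[k - 1]
    A[k, k - 1] += h

# Backward-difference operator D and residual map r(u) = (D + I) A u - u,
# a discretization of x' + x - u evaluated on the surrogate state.
D = (np.eye(N) - np.eye(N, k=-1)) / h
M = (D + np.eye(N)) @ A - np.eye(N)

x_target = 0.5 * np.ones(N)   # constant target state (assumed)
lam = 1.0                     # residual-penalty weight (assumed)
lr = 2.0                      # gradient-descent step size

def loss(u):
    """Objective cost (state tracking) plus residual penalty."""
    x = A @ u
    return h * np.sum((x - x_target) ** 2) + lam * h * np.sum((M @ u) ** 2)

# Optimization phase: unconstrained gradient descent on the control.
# Here the gradient is analytic because A and M are linear; with a neural
# operator, automatic differentiation would supply it.
u = np.zeros(N)
initial = loss(u)
for _ in range(1000):
    x = A @ u
    grad = 2.0 * h * (A.T @ (x - x_target) + lam * (M.T @ (M @ u)))
    u -= lr * grad
final = loss(u)
```

The key design point mirrored here is that the surrogate operator is trained (or, in this sketch, constructed) once, independently of any particular cost; changing `x_target` or the cost function only changes the cheap optimization phase, not the operator.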
