Progressively Sampled Equality-Constrained Optimization
Frank E. Curtis, Lingjun Guo, Daniel P. Robinson
Published: 2025/10/1
Abstract
An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems in which the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, where the sample set grows progressively from one problem to the next. Under assumptions on the constraint functions and their first- and second-order derivatives that are reasonable in some real-world settings of interest, it is shown that -- with a sufficiently large initial sample -- solving a sequence of problems defined through progressive sampling yields a better worst-case sample-complexity bound than solving a single problem with the full set of samples. The results of numerical experiments on a set of test problems demonstrate that the proposed approach can be effective in practice.
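To illustrate the progressive-sampling idea described in the abstract, the following is a minimal sketch, not the authors' algorithm: it solves a toy problem min ½‖x‖² subject to a sample-average equality constraint, repeatedly enlarging the sample set and warm-starting each subproblem from the previous solution. The data (A, b), the doubling schedule for the sample size, and the use of SciPy's SLSQP solver are all illustrative assumptions.

```python
# Hypothetical sketch of progressive sampling for an equality-constrained
# problem  min f(x)  s.t.  (1/N) * sum_i c_i(x) = 0  (not the paper's method).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, N = 5, 10_000                       # variable dimension, total number of terms
A = rng.standard_normal((N, n))        # term i: c_i(x) = A[i] @ x - b[i]  (assumed form)
b = rng.standard_normal(N)

def objective(x):
    return 0.5 * x @ x                 # simple strongly convex objective

def sampled_constraint(x, idx):
    # Sample-average estimate of the constraint over the current index set.
    return np.mean(A[idx] @ x - b[idx])

x = np.zeros(n)                        # initial iterate
sample_size = 100                      # "sufficiently large" initial sample (assumed)
while True:
    idx = rng.choice(N, size=sample_size, replace=False)
    res = minimize(
        objective, x, method="SLSQP",
        constraints=[{"type": "eq",
                      "fun": lambda x, idx=idx: sampled_constraint(x, idx)}],
    )
    x = res.x                          # warm-start the next, larger subproblem
    if sample_size == N:               # full sample reached: stop
        break
    sample_size = min(2 * sample_size, N)   # grow the sample progressively

print("approximate solution:", x)
```

The intended benefit, as the abstract explains, is that early subproblems are cheap because they involve few constraint terms, and their solutions serve as good starting points for later, larger subproblems; the paper's analysis quantifies this through a worst-case sample-complexity bound under assumptions on the constraint functions and their derivatives.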