Introducing the Method of Ellipcenters, a new first-order technique for unconstrained optimization

Roger Behling, Ramyro Aquines Correa, Eduarda Ferreira Zanatta, Vincent Guigues

Published: 2025/9/18

Abstract

In this paper, we introduce the Method of Ellipcenters (ME) for unconstrained minimization. At the cost of two gradient evaluations per iteration and a line search, the next iterate is taken as the center of an interpolating ellipse. The ellipse built at each step is designed to emulate the level curve of the objective function restricted to a suitable two-dimensional affine space, determined by the current iterate and two appropriate gradient vectors. We present the method for general unconstrained minimization and carry out a convergence analysis for the case where the objective function is quadratic. In this setting, ME enjoys linear convergence at a rate at least as good as that of the steepest descent (gradient) method with optimal step size. In our experiments, however, ME was much faster than the gradient method with optimal step size. Moreover, ME appears highly competitive with several well-established algorithms, including Nesterov's accelerated gradient, Barzilai-Borwein, and conjugate gradient methods. Its efficiency, in terms of both time and number of iterations, is even more pronounced on ill-conditioned problems. A theoretical feature that may explain this is that ME coincides with Newton's method for quadratic programs in two-dimensional Euclidean spaces, solving them in a single step. In our numerical tests, convergence in one iteration was also observed for much larger problem sizes.
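To illustrate the geometric idea behind the two-dimensional case mentioned above: for a quadratic f(x) = (1/2)xᵀAx − bᵀx with A symmetric positive definite, every level curve is an ellipse centered at the minimizer A⁻¹b, so moving to the center of the level-set ellipse reaches the solution in one step, which is exactly the Newton step. The following minimal sketch (not code from the paper; the helper names `solve2x2` and `newton_step` are illustrative) verifies this one-step behavior on an ill-conditioned 2D quadratic.

```python
# Sketch of the 2D quadratic case: for f(x) = 1/2 x^T A x - b^T x, the level
# curves are ellipses centered at A^{-1} b, so one Newton step from any point
# lands at the exact minimizer (the setting where, per the abstract, ME
# coincides with Newton). Names below are illustrative, not from the paper.

def solve2x2(A, r):
    """Solve the 2x2 linear system A y = r by Cramer's rule."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return ((d * r[0] - b * r[1]) / det, (a * r[1] - c * r[0]) / det)

def newton_step(A, b, x):
    """One Newton step for f(x) = 1/2 x^T A x - b^T x from the point x."""
    # Gradient of f at x is A x - b.
    grad = (A[0][0] * x[0] + A[0][1] * x[1] - b[0],
            A[1][0] * x[0] + A[1][1] * x[1] - b[1])
    step = solve2x2(A, grad)          # Newton direction: A^{-1} grad
    return (x[0] - step[0], x[1] - step[1])

# Ill-conditioned 2D quadratic (condition number 100): a single step from an
# arbitrary start reaches the center of every level-set ellipse, A^{-1} b.
A = ((100.0, 0.0), (0.0, 1.0))
b = (1.0, 2.0)
x1 = newton_step(A, b, (5.0, -3.0))   # -> (0.01, 2.0), the exact minimizer
```

This one-step exactness is special to the two-dimensional quadratic setting; in higher dimensions ME instead converges linearly, at a rate no worse than steepest descent with optimal step, as stated above.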