Loss-Transformation Invariance in the Damped Newton Method
Alexander Shestakov, Sushil Bohara, Samuel Horváth, Martin Takáč, Slavomír Hanzely
Published: 2025/9/30
Abstract
The Newton method is a powerful optimization algorithm, valued for its rapid local convergence and elegant geometric properties. However, its theoretical guarantees are usually limited to convex problems. In this work, we ask whether convexity is truly necessary. We introduce the concept of loss-transformation invariance, showing that damped Newton methods are unaffected by monotone transformations of the loss, apart from a simple rescaling of the step size. This insight allows difficult losses to be replaced with easier transformed versions, enabling convexification of many nonconvex problems while preserving the same sequence of iterates. Our analysis also explains the effectiveness of unconventional step sizes in Newton's method, including values greater than one and even negative steps.
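For intuition, here is a minimal sketch of why a monotone transformation only rescales the damped Newton step. The notation (f for the original loss, φ for the transformation) and the smoothness and invertibility assumptions are supplied for illustration and are not necessarily the paper's own.

```latex
% A minimal sketch (our notation): damped Newton on a transformed loss
% g = \varphi \circ f, assuming \varphi is twice differentiable with
% \varphi' \neq 0 and \nabla^2 f(x) invertible.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
By the chain rule,
\[
\nabla g(x) = \varphi'(f(x))\,\nabla f(x), \qquad
\nabla^2 g(x) = \varphi'(f(x))\,\nabla^2 f(x)
              + \varphi''(f(x))\,\nabla f(x)\,\nabla f(x)^{\top}.
\]
The Hessian of $g$ is a scaled $\nabla^2 f$ plus a rank-one term, so by the
Sherman--Morrison formula the Newton direction is preserved up to a scalar:
\[
\bigl[\nabla^2 g(x)\bigr]^{-1}\nabla g(x)
  = \frac{\bigl[\nabla^2 f(x)\bigr]^{-1}\nabla f(x)}
         {1 + \frac{\varphi''(f(x))}{\varphi'(f(x))}\,
          \nabla f(x)^{\top}\bigl[\nabla^2 f(x)\bigr]^{-1}\nabla f(x)}.
\]
Hence a damped Newton step on $g$ with step size $\alpha$ reproduces the
step on $f$ with step size $\alpha$ divided by the scalar denominator
above; since that denominator may exceed $1$, lie in $(0,1)$, or be
negative, effective step sizes greater than one, or even negative ones,
arise naturally.
\end{document}
```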