Approximation Analysis of the Entropic Penalty in Quadratic Programming
Venkatkrishna Karumanchi, Gabriel Rioux, Ziv Goldfeld
Published: 2025/9/24
Abstract
Quadratic assignment problems are a fundamental class of combinatorial optimization problems which are ubiquitous in applications, yet their exact resolution is NP-hard. To circumvent this impasse, it was proposed to regularize such problems via an entropic penalty, leading to computationally tractable proxies. Indeed, this enabled efficient algorithms, notably in the context of Gromov-Wasserstein (GW) problems, but it is unknown how well solutions of the regularized problem approximate those of the original one for small regularization parameters. Treating the broader framework of general quadratic programs (QPs), we establish that the approximation gap decays exponentially quickly for concave QPs, while the rate for general indefinite or convex QPs can be as slow as linear. Our analysis builds on the study of the entropic penalty in linear programming by leveraging a new representation for concave QPs, which connects them to a family of linear programs with varying costs. Building on these results, we design an algorithm which, given a local solution of the entropic QP, returns a candidate minimizer of the original QP and certifies it. We apply these findings to a general class of discrete GW problems, yielding new variational forms and the first exponentially vanishing entropic approximation bound in the GW literature.
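To make the setup concrete, the following is a minimal sketch of the entropically penalized quadratic program the abstract refers to, under the assumption that the feasible set is a polytope of probability vectors and the penalty is the negative Shannon entropy; the symbols $Q$, $\mathcal{F}$, $\varepsilon$, $\mathrm{QP}$, and $\mathrm{QP}_\varepsilon$ are illustrative notation, not taken from the paper body.

```latex
% Unregularized quadratic program over a polytope F of probability vectors:
%   QP = min_{x in F} <x, Q x>
% Entropically penalized proxy with regularization parameter eps > 0:
%   QP_eps = min_{x in F} <x, Q x> + eps * sum_i x_i log x_i
% The abstract's approximation gap is QP_eps - QP, studied as eps -> 0:
% it is claimed to vanish exponentially fast for concave QPs and possibly
% only linearly in eps for indefinite or convex QPs.
\[
  \mathrm{QP} \;=\; \min_{x \in \mathcal{F}} \langle x, Qx\rangle,
  \qquad
  \mathrm{QP}_{\varepsilon} \;=\; \min_{x \in \mathcal{F}}
    \Big\{ \langle x, Qx\rangle + \varepsilon \sum_{i} x_i \log x_i \Big\}.
\]
```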