Least squares variational inference

Yvann Le Fay, Nicolas Chopin, Simon Barthelmé

Published: 2025/2/5

Abstract

Variational inference consists in finding the best approximation of a target distribution within a certain family, where `best' typically means the smallest Kullback-Leibler divergence. We show that, when the approximation family is exponential, the best approximation is the solution of a fixed-point equation. We introduce LSVI (Least-Squares Variational Inference), a Monte Carlo variant of the corresponding fixed-point recursion, where each iteration boils down to an ordinary least-squares regression and does not require computing gradients. We show that LSVI is equivalent to stochastic mirror descent; we use this insight to derive convergence guarantees. We introduce various ideas to improve LSVI further when the approximation family is Gaussian, leading to an $O(d^3)$ complexity in the dimension $d$ of the target in the full-covariance case, and an $O(d)$ complexity in the mean-field case. We show that LSVI outperforms state-of-the-art methods in a range of examples, while remaining gradient-free.
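To make the abstract's description concrete, here is a minimal sketch of the kind of iteration it describes for a mean-field Gaussian family: sample from the current approximation, regress the unnormalised log-target on the sufficient statistics by ordinary least squares, and take the slope coefficients as (damped) new natural parameters. The damping factor `rho`, the toy target, and the clipping of the second natural parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

def log_target(x):
    # Toy unnormalised log-target: independent Gaussians (assumed example).
    mu, var = np.array([1.0, -2.0]), np.array([0.5, 2.0])
    return -0.5 * np.sum((x - mu) ** 2 / var, axis=-1)

# Mean-field Gaussian natural parameters: eta1 = mu/var, eta2 = -1/(2*var).
mu, var = np.zeros(d), np.ones(d)
n_samples, n_iters, rho = 2000, 50, 0.5  # rho: assumed damping / step size

for _ in range(n_iters):
    x = mu + np.sqrt(var) * rng.standard_normal((n_samples, d))
    y = log_target(x)
    # Design matrix: intercept plus sufficient statistics T(x) = (x, x**2).
    X = np.column_stack([np.ones(n_samples), x, x ** 2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS, no gradients of log_target
    eta1_hat, eta2_hat = beta[1:1 + d], beta[1 + d:]
    # Damped update of the natural parameters; keep eta2 negative (ad hoc guard).
    eta1 = (1 - rho) * (mu / var) + rho * eta1_hat
    eta2 = (1 - rho) * (-0.5 / var) + rho * np.minimum(eta2_hat, -1e-6)
    var = -0.5 / eta2
    mu = eta1 * var

print("approx mean:", mu, "approx var:", var)
```

On this toy target the regression recovers the exact natural parameters, so the recursion converges to the target's mean and variance; the paper's full-covariance and convergence results are developed in the sections below.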