Convergence analysis of equilibrium methods for inverse problems

Daniel Obmann, Gyeongha Hwang, Markus Haltmeier

Published: 2023/6/2

Abstract

Solving inverse problems \(Ax = y\) is central to a variety of practically important fields such as medical imaging, remote sensing, and non-destructive testing. The most successful and theoretically best-understood method is convex variational regularization, where approximate but stable solutions are defined as minimizers of \( \|A(\cdot) - y^\delta\|^2 / 2 + \alpha \mathcal{R}(\cdot)\), with \(\mathcal{R}\) a regularization functional. Recent methods such as deep equilibrium (DEQ) models and plug-and-play (PnP) approaches, however, go beyond variational regularization. Motivated by these innovations, we introduce implicit non-variational (INV) regularization, where approximate solutions are defined as solutions of \(A^*(A x - y^\delta) + \alpha R(x) = 0\) for some regularization operator \(R\). When the regularization operator is the gradient of a functional, INV reduces to classical variational regularization. In methods such as DEQ and PnP, however, \(R\) need not be a gradient field, and the existing theoretical foundation remains incomplete. To address this, we establish stability and convergence results in this broader setting, including convergence rates and stability estimates measured in an absolute Bregman distance.
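To make the INV setting concrete, below is a minimal numerical sketch, not taken from the paper: it approximates a solution of \(A^*(A x - y^\delta) + \alpha R(x) = 0\) by a damped fixed-point iteration \(x \leftarrow x - \tau\,(A^*(A x - y^\delta) + \alpha R(x))\). When \(R\) is the gradient of a convex functional this is gradient descent on the variational objective; for a general operator \(R\) it mimics the equilibrium/PnP setting described above. All names (`inv_regularized_solution`, `tau`, the Tikhonov-type choice \(R(x) = x\)) are illustrative assumptions, not the authors' method.

```python
import numpy as np

def inv_regularized_solution(A, y_delta, R, alpha, tau=1e-2, iters=5000, tol=1e-8):
    """Approximate a solution of A^T (A x - y_delta) + alpha * R(x) = 0
    via the damped fixed-point iteration x <- x - tau * residual."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Residual of the INV equation; it vanishes at an equilibrium point.
        residual = A.T @ (A @ x - y_delta) + alpha * R(x)
        x = x - tau * residual
        if np.linalg.norm(residual) < tol:
            break
    return x

# Toy example with R(x) = x, i.e. R is the gradient field of ||x||^2 / 2,
# so the iteration recovers classical Tikhonov regularization.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
y_delta = A @ rng.standard_normal(10) + 0.01 * rng.standard_normal(20)
x_alpha = inv_regularized_solution(A, y_delta, R=lambda x: x, alpha=0.1)
```

Swapping the Tikhonov map for a non-gradient operator \(R\) (e.g. a learned denoiser-based residual, as in DEQ/PnP schemes) leaves the iteration unchanged; the paper's contribution is the stability and convergence theory for exactly this non-variational case.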
