On the choice of optimization norm for Anderson acceleration of the Picard iteration for Navier-Stokes equations
Elizabeth Hawkins, Leo Rebholz
Published: 2025/5/12
Abstract
While the most recent Anderson acceleration (AA) convergence theory [Pollock et al, {\it IMA Num. An.}, 2021] requires that the AA optimization norm match the Hilbert space norm associated with the fixed point operator, in implementations the $\ell^2$ norm is perhaps the most common choice. Unfortunately, little research has addressed this discrepancy or clarified when the $\ell^2$ norm is an acceptable choice. To address this issue, we consider AA applied to the Picard iteration for the Navier-Stokes equations (NSE) with varying choices of the AA optimization norm. We first prove a sharpened and generalized convergence estimate for depth $m$ AA-Picard for the NSE with the $H^1_0$ AA optimization norm via a problem-specific analysis: we treat the nonlinear terms more sharply than previous AA-Picard convergence studies, remove a small-data assumption, and develop new AA term identities for the NSE nonlinear term estimates. Next, we prove a convergence result for the case where $L^2$ is used as the AA optimization norm, and the resulting estimate is very similar to that of the $H^1_0$ case. While no analogous theory seems possible for the $\ell^2$ norm, we ran several numerical tests comparing AA-Picard convergence across choices of AA optimization norm. These tests revealed that convergence behavior was always similar for $L^2$ and $H^1_0$ and {\it usually but not always} similar for $\ell^2$: on a test problem for channel flow past a cylinder with coarser meshes, convergence of AA-Picard using $\ell^2$ performed significantly worse than using $L^2$ or $H^1_0$.
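To make the role of the AA optimization norm concrete, the following is a minimal sketch (not the authors' implementation) of depth-$m$ Anderson acceleration of a generic Picard iteration $x \leftarrow g(x)$, where the least-squares step is carried out in the inner-product norm induced by a user-supplied SPD Gram matrix $M$. Taking $M = I$ recovers the $\ell^2$ norm, while for a finite element discretization one could pass the mass matrix (for $L^2$) or the stiffness matrix (for $H^1_0$). The function name, signature, and the simple linear test map are all illustrative assumptions.

```python
import numpy as np

def anderson_picard(g, x0, M=None, m=3, tol=1e-10, maxit=200):
    """Depth-m Anderson acceleration of the Picard iteration x <- g(x).

    The AA least-squares problem is solved in the norm ||v||_M = sqrt(v^T M v)
    induced by the SPD matrix M (M = None means the plain l2 norm).
    Returns the approximate fixed point and the iteration count.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    Lc = np.linalg.cholesky(np.eye(n) if M is None else M)  # M = Lc Lc^T
    G_hist, F_hist = [], []  # histories of g(x) values and residuals
    for k in range(maxit):
        gx = g(x)
        f = gx - x  # fixed-point residual
        if np.linalg.norm(Lc.T @ f) < tol:
            return gx, k
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:  # keep at most m residual differences
            G_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx  # plain Picard step until history is available
        else:
            # Columns are successive differences of residuals / g-values.
            dF = np.column_stack([F_hist[i + 1] - F_hist[i]
                                  for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i]
                                  for i in range(len(G_hist) - 1)])
            # Weighted least squares: min_gamma || f - dF gamma ||_M,
            # implemented as an l2 problem after multiplying by Lc^T.
            gamma, *_ = np.linalg.lstsq(Lc.T @ dF, Lc.T @ f, rcond=None)
            x = gx - dG @ gamma  # accelerated update
    return x, maxit
```

For example, applying this to a contractive affine map $g(x) = Bx + c$ with a stiffness-like tridiagonal Gram matrix converges to the fixed point $(I - B)^{-1}c$; swapping the Gram matrix changes only the weighting of the inner least-squares problem, which is exactly the design choice the abstract studies.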