Unpicking Data at the Seams: Understanding Disentanglement in VAEs
Carl Allen
Published: 2024/10/29
Abstract
A generative latent variable model is said to be disentangled when varying a single latent co-ordinate changes a single aspect of the generated samples, e.g. object position or facial expression in an image. Related phenomena are seen across several generative paradigms, including state-of-the-art diffusion models, but disentanglement is most notably observed in Variational Autoencoders (VAEs), where the oft-used diagonal posterior covariances are argued to be the cause. We make this picture precise. Starting from a known exact link between optimal Gaussian posteriors and decoder derivatives, we show how diagonal posteriors "lock" a decoder's local axes so that the density over the data manifold factorises along independent one-dimensional seams that map to axis-aligned directions in latent space. This gives a clear definition of disentanglement, explains why it emerges in VAEs, and shows that, under stated assumptions, ground-truth factors are identifiable even with a symmetric prior.
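For orientation, the setting referred to here is the standard Gaussian VAE with a diagonal-covariance posterior. The sketch below uses generic, illustrative notation ($d_\theta$, $\mu_\phi$, $s_\phi$ are not necessarily the paper's symbols) and is only meant to show where the diagonal covariance enters the model and objective:

\[
p(z) = \mathcal{N}(z;\, 0, I), \qquad
p_\theta(x \mid z) = \mathcal{N}\!\big(x;\, d_\theta(z),\, \sigma^2 I\big), \qquad
q_\phi(z \mid x) = \mathcal{N}\!\big(z;\, \mu_\phi(x),\, \operatorname{diag}(s_\phi^2(x))\big),
\]
\[
\mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big).
\]

In prior VAE analyses, the link between optimal Gaussian posteriors and decoder derivatives is often written approximately, via a local expansion of $d_\theta$ around the posterior mean, as an optimal posterior precision of the form $I + \tfrac{1}{\sigma^2} J^\top J$ with $J = \partial d_\theta(z)/\partial z$; constraining $q_\phi$ to diagonal covariance then pushes $J^\top J$ towards diagonal, i.e. aligns the decoder's local axes with the latent co-ordinates. This approximate form is given only as background; the exact formulation of the link is developed in the paper.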