A Unified Theory of Exact Inference and Learning in Exponential Family Latent Variable Models

Sacha Sokoloski

Published: 2024/4/30

Abstract

Bayes' rule describes how to infer posterior beliefs about latent variables given observations, and inference is a critical step in learning algorithms for latent variable models (LVMs). Although exact algorithms for inference and learning exist for certain LVMs such as linear Gaussian models and mixture models, researchers must typically develop approximate inference and learning algorithms when applying novel LVMs. Here we study the line that separates LVMs that rely on approximation schemes from those that do not, and develop a general theory of exponential family LVMs for which inference and learning may be implemented exactly. Firstly, under mild assumptions about the exponential family form of the LVM, we derive a necessary and sufficient constraint on the parameters of the LVM under which the prior and posterior over the latent variables lie in the same exponential family. We then show that a variety of well-known and novel models indeed have this constrained exponential family form. Finally, we derive generalized inference and learning algorithms for these LVMs, and demonstrate them on a range of examples. Our unified perspective facilitates both understanding and implementing exact inference and learning algorithms for a wide variety of models, and may guide researchers in the discovery of new models that avoid unnecessary approximations.
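As a concrete illustration of the kind of parameter constraint at issue (a minimal sketch in notation of our own choosing, which need not match the paper's exact formulation), consider a joint density over observations x and latents z with a harmonium-style exponential family form

\[ p(x, z) \propto \exp\big( \theta_x \cdot s_x(x) + s_x(x)^\top \Theta_{xz}\, s_z(z) + \theta_z \cdot s_z(z) \big), \]

where \(s_x\) and \(s_z\) are sufficient statistics and \(\theta_x\), \(\theta_z\), and \(\Theta_{xz}\) are parameters. Bayes' rule then yields a posterior that is always in the exponential family over z, with natural parameters shifted by the observed statistics:

\[ p(z \mid x) \propto \exp\big( (\theta_z + \Theta_{xz}^\top s_x(x)) \cdot s_z(z) \big). \]

Marginalizing out x, by contrast, gives a prior

\[ p(z) \propto \exp\big( \theta_z \cdot s_z(z) + \psi_x(\theta_x + \Theta_{xz}\, s_z(z)) \big), \]

where \(\psi_x\) is the log-partition function of the observable family. The prior lies in the same exponential family as the posterior precisely when \(\psi_x(\theta_x + \Theta_{xz}\, s_z(z))\) is affine in \(s_z(z)\), that is, when

\[ \psi_x(\theta_x + \Theta_{xz}\, s_z(z)) = \rho \cdot s_z(z) + \psi_0 \]

for some \(\rho\) and constant \(\psi_0\), in which case the prior has natural parameters \(\theta_z + \rho\).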
