Integrative Variational Autoencoders for Generative Modeling of an Image Outcome with Multiple Input Images
Bowen Lei, Yeseul Jeon, Rajarshi Guhaniyogi, Aaron Scheffler, Bani Mallick, for the Alzheimer's Disease Neuroimaging Initiative
Published: 2024/2/5
Abstract
Understanding relationships across multiple imaging modalities is central to neuroimaging research. We introduce the Integrative Variational Autoencoder (InVA), the first hierarchical VAE framework for image-on-image regression in multimodal neuroimaging. Unlike standard VAEs, which are not designed for predictive integration across modalities, InVA models outcome images as functions of both shared and modality-specific features. This flexible, data-driven approach avoids the rigid assumptions of classical tensor regression and outperforms conventional VAEs and nonlinear models such as Bayesian additive regression trees (BART). As a key application, InVA accurately predicts costly PET scans from structural MRI, offering an efficient and powerful tool for multimodal neuroimaging.
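To make the abstract's description concrete, the sketch below illustrates the general idea of image-on-image regression with per-modality encoders that produce shared and modality-specific latent features, which a decoder then maps to the outcome image. All layer sizes, class names (ModalityEncoder, InVASketch), the flattened-image inputs, the averaging of shared posteriors, and the single-level latent structure are illustrative assumptions, not the authors' implementation or the paper's hierarchical construction.

```python
# Minimal, hypothetical sketch of the idea described in the abstract:
# each input modality gets its own encoder, whose latents split into a shared
# block and a modality-specific block; a decoder predicts the outcome image.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Encodes one flattened input image into shared and modality-specific latents."""
    def __init__(self, in_dim, shared_dim, specific_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Separate heads for the mean and log-variance of each latent block.
        self.shared_mu = nn.Linear(hidden, shared_dim)
        self.shared_logvar = nn.Linear(hidden, shared_dim)
        self.spec_mu = nn.Linear(hidden, specific_dim)
        self.spec_logvar = nn.Linear(hidden, specific_dim)

    def forward(self, x):
        h = self.net(x)
        return (self.shared_mu(h), self.shared_logvar(h),
                self.spec_mu(h), self.spec_logvar(h))


def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)


class InVASketch(nn.Module):
    """Hypothetical two-modality example: predict an outcome image (e.g. PET)
    from input images (e.g. MRI-derived maps)."""
    def __init__(self, in_dims, out_dim, shared_dim=16, specific_dim=8, hidden=256):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ModalityEncoder(d, shared_dim, specific_dim, hidden) for d in in_dims])
        latent_dim = shared_dim + specific_dim * len(in_dims)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, inputs):
        shared_mus, shared_logvars, specifics, kl = [], [], [], 0.0
        for enc, x in zip(self.encoders, inputs):
            s_mu, s_lv, p_mu, p_lv = enc(x)
            shared_mus.append(s_mu)
            shared_logvars.append(s_lv)
            specifics.append(reparameterize(p_mu, p_lv))
            kl = kl - 0.5 * torch.sum(1 + p_lv - p_mu.pow(2) - p_lv.exp())
        # Pool per-modality posteriors for the shared latent (a simple average
        # here; the paper's hierarchical construction is more elaborate).
        s_mu = torch.stack(shared_mus).mean(0)
        s_lv = torch.stack(shared_logvars).mean(0)
        z_shared = reparameterize(s_mu, s_lv)
        kl = kl - 0.5 * torch.sum(1 + s_lv - s_mu.pow(2) - s_lv.exp())
        z = torch.cat([z_shared] + specifics, dim=-1)
        return self.decoder(z), kl


if __name__ == "__main__":
    model = InVASketch(in_dims=[64, 64], out_dim=64)
    x1, x2 = torch.randn(4, 64), torch.randn(4, 64)   # two input modalities
    y = torch.randn(4, 64)                             # outcome image, flattened
    y_hat, kl = model([x1, x2])
    loss = F.mse_loss(y_hat, y) + 1e-3 * kl            # reconstruction + KL penalty
    loss.backward()
    print(y_hat.shape, float(loss))
```

The design point the example is meant to convey: concatenating a pooled shared latent with modality-specific latents lets the decoder draw on information common across input images as well as features unique to each, which is the integration the abstract contrasts with standard single-modality VAEs.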