Overcoming Output Dimension Collapse: How Sparsity Enables Zero-shot Brain-to-Image Reconstruction at Small Data Scales

Kenya Otsuka, Yoshihiro Nagano, Yukiyasu Kamitani

Published: 2025/9/19

Abstract

Advances in brain-to-image reconstruction are enabling us to externalize the subjective visual experiences encoded in the brain as images. Achieving such reconstruction with limited training data requires generalization beyond the training set, a task known as zero-shot prediction. Despite its importance, we still lack theoretical guidelines for achieving efficient and accurate reconstruction. In this paper, we provide a theoretical analysis of two widely used models for translating brain activity to latent image features. We define the data scale as the ratio of the number of training samples to the latent feature dimensionality and characterize how each model behaves across data scales. We first show that the naive linear regression model, which uses a shared set of input variables for all outputs, suffers from "output dimension collapse" at small data scales, restricting generalization beyond the training data. We then mathematically characterize the prediction error of the sparse linear regression model by deriving formulas linking prediction error with data scale and other problem parameters. Leveraging the sparsity of the brain-to-feature mapping, this approach enables accurate zero-shot prediction even at small data scales without being trapped in output dimension collapse. Our results provide a theoretical guideline for achieving zero-shot reconstruction and highlight the benefits of variable selection in brain-to-image reconstruction.
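
As a rough illustration of the comparison described above, the sketch below contrasts a naive dense readout (ridge regression, a shared set of input variables for all outputs) with a per-output sparse readout (lasso) on synthetic data whose ground-truth brain-to-feature mapping is sparse. This is a minimal sketch under assumed settings: the dimensions, noise level, and regularization strengths are illustrative choices, and scikit-learn's Ridge and Lasso stand in for the models analyzed in the paper. Note how the rank of the dense weight matrix is capped by the number of training samples, which is one way to see the output dimension collapse.

# Minimal sketch (illustrative assumptions, not the paper's implementation):
# compare a dense readout with a sparse per-output readout when the
# data scale (n_train / n_latent) is small.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n_train, n_test = 100, 200
n_voxels, n_latent = 500, 1000   # data scale = n_train / n_latent = 0.1
k_active = 5                     # assumed sparsity: voxels per latent dimension

# Ground-truth sparse brain-to-feature mapping: each latent dimension
# depends on only k_active voxels.
W = np.zeros((n_voxels, n_latent))
for j in range(n_latent):
    idx = rng.choice(n_voxels, size=k_active, replace=False)
    W[idx, j] = rng.standard_normal(k_active)

X_train = rng.standard_normal((n_train, n_voxels))
Y_train = X_train @ W + 0.1 * rng.standard_normal((n_train, n_latent))
X_test = rng.standard_normal((n_test, n_voxels))
Y_test = X_test @ W

# Naive model: one shared set of input variables for all outputs.
dense = Ridge(alpha=1.0).fit(X_train, Y_train)
# Sparse model: the L1 penalty performs variable selection per output.
sparse = Lasso(alpha=0.05, max_iter=5000).fit(X_train, Y_train)

for name, model in [("dense (ridge)", dense), ("sparse (lasso)", sparse)]:
    mse = np.mean((model.predict(X_test) - Y_test) ** 2)
    print(f"{name}: test MSE = {mse:.3f}")

# Output dimension collapse: the dense weight matrix has rank at most
# n_train (here 100), so its predictions span a low-dimensional subspace
# of the 1000-dimensional latent space; the sparse weights are not
# constrained in this way.
print("rank of dense weights: ", np.linalg.matrix_rank(dense.coef_))
print("rank of sparse weights:", np.linalg.matrix_rank(sparse.coef_))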
