$i$MIND: Insightful Multi-subject Invariant Neural Decoding
Zixiang Yin, Jiarui Li, Zhengming Ding
Published: 2025/9/22
Abstract
Decoding visual signals holds the tantalizing potential to unravel the complexities of cognition and perception. While recent studies have focused on reconstructing visual stimuli from neural recordings to bridge brain activity with visual imagery, existing methods offer limited insight into the underlying mechanisms of visual processing in the brain. To address this gap, we present an \textit{i}nsightful \textbf{M}ulti-subject \textbf{I}nvariant \textbf{N}eural \textbf{D}ecoding ($i$MIND) model, which employs a novel dual-decoding framework, combining biometric and semantic decoding, to offer neural interpretability in a data-driven manner and deepen our understanding of brain-based visual functionality. Our $i$MIND model operates in three core steps: establishing a shared neural representation space across subjects with a ViT-based masked autoencoder, disentangling neural features into complementary subject-specific and object-specific components, and performing dual decoding to support both biometric and semantic classification tasks. Experimental results demonstrate that $i$MIND achieves state-of-the-art decoding performance with minimal scalability limitations. Furthermore, $i$MIND empirically generates voxel-object activation fingerprints that reveal object-specific neural patterns and enable investigation of subject-specific variations in attention to identical stimuli. These findings lay a foundation for more interpretable and generalizable subject-invariant neural decoding, advancing our understanding of voxel-level semantic selectivity as well as the dynamics of neural visual processing.
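To make the dual-decoding idea concrete, the sketch below shows one way the three steps described above could be wired together in PyTorch: a shared encoder standing in for the ViT-based masked autoencoder, two branches that split the shared representation into subject-specific and object-specific components, and separate biometric and semantic classification heads. All module names, layer choices, and dimensions here are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the dual-decoding framework described in the abstract.
# NOT the authors' implementation: the MLP stand-ins, dimensions, and class
# counts are assumptions. The abstract only specifies a ViT-based masked
# autoencoder for the shared space, a subject/object feature split, and
# two classification heads (biometric and semantic).
import torch
import torch.nn as nn


class DualDecoder(nn.Module):
    def __init__(self, shared_dim=768, feat_dim=256, n_subjects=8, n_classes=80):
        super().__init__()
        # Placeholder for the shared cross-subject representation; in the
        # paper this would be produced by a ViT-based masked autoencoder.
        self.shared_encoder = nn.Sequential(
            nn.LazyLinear(shared_dim), nn.GELU(), nn.Linear(shared_dim, shared_dim)
        )
        # Disentangle the shared representation into complementary parts.
        self.subject_branch = nn.Sequential(nn.Linear(shared_dim, feat_dim), nn.GELU())
        self.object_branch = nn.Sequential(nn.Linear(shared_dim, feat_dim), nn.GELU())
        # Dual decoding: biometric head (which subject?) and
        # semantic head (which object category was viewed?).
        self.biometric_head = nn.Linear(feat_dim, n_subjects)
        self.semantic_head = nn.Linear(feat_dim, n_classes)

    def forward(self, voxels):
        z = self.shared_encoder(voxels)        # shared neural representation
        z_subj = self.subject_branch(z)        # subject-specific component
        z_obj = self.object_branch(z)          # object-specific component
        return self.biometric_head(z_subj), self.semantic_head(z_obj)


if __name__ == "__main__":
    model = DualDecoder()
    fmri = torch.randn(4, 4096)                # toy batch of flattened voxel activity
    subj_logits, sem_logits = model(fmri)
    print(subj_logits.shape, sem_logits.shape)  # (4, 8) and (4, 80)
```

Training such a model would typically combine a biometric and a semantic classification loss over the two heads; how the paper enforces complementarity between the subject and object components is not stated in the abstract.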