Dynamic Vision from EEG Brain Recordings: How Much Does EEG Know?
Prajwal Singh, Anupam Sharma, Pankaj Pandey, Krishna Miyapuram, Shanmuganathan Raman
Published: 2025/5/27
Abstract
Reconstructing dynamic visual stimuli from brain EEG recordings is challenging due to the non-stationary and noisy nature of EEG signals and the limited availability of EEG-video datasets. Prior work has largely focused on static image reconstruction, leaving open the question of whether EEG carries sufficient information for dynamic video decoding. In this work, we present EEGVid, a framework that reconstructs dynamic video stimuli from EEG signals while systematically probing the information they encode. Our approach first learns EEG representations and then uses these features for video synthesis with a temporally conditioned StyleGAN-ADA that maps EEG embeddings to specific frame positions. Through experiments on three datasets (SEED, EEG-Video Action, SEED-DV), we demonstrate that EEG supports semantically meaningful reconstruction of dynamic visual content, and we quantify how much EEG knows: (i) hemispheric asymmetry, with the left hemisphere more predictive of visual content and the right hemisphere of emotional content; (ii) the temporal lobe as the most informative region; and (iii) EEG timesteps 100–300 as the most critical for dynamic visual encoding. Importantly, while generative priors contribute fine spatial detail, EEG provides the semantic and temporal guidance necessary for reconstructing videos that align with the observed stimuli. This positions video generation not as a standalone generative benchmark, but as a means to visualize and validate the representational content of EEG in the context of dynamic vision.
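To make the temporal-conditioning idea in the abstract concrete, the sketch below shows one way an EEG embedding could be fused with a frame-position embedding to form a per-frame conditioning vector for a StyleGAN-ADA-style generator. This is a minimal illustration, not the authors' implementation: the module name TemporalCondition, the dimensions (512-dimensional EEG features, 16 frames), and the concatenate-then-project fusion are all assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of temporally conditioned
# synthesis from EEG embeddings: a pretrained EEG encoder provides eeg_feat,
# and a StyleGAN-ADA-like generator G(z, c) is assumed to accept the resulting
# conditioning vector c for each frame position.
import torch
import torch.nn as nn

class TemporalCondition(nn.Module):
    """Fuses an EEG embedding with a learned frame-position embedding into one
    conditioning vector per video frame to be synthesized."""

    def __init__(self, eeg_dim: int = 512, num_frames: int = 16, cond_dim: int = 512):
        super().__init__()
        # Learned embedding for each frame index (the temporal condition).
        self.frame_embed = nn.Embedding(num_frames, cond_dim)
        # Project the concatenated (EEG feature, frame embedding) pair
        # down to the generator's conditioning dimension.
        self.proj = nn.Linear(eeg_dim + cond_dim, cond_dim)

    def forward(self, eeg_feat: torch.Tensor, frame_idx: torch.Tensor) -> torch.Tensor:
        # eeg_feat:  (B, eeg_dim) embedding from a pretrained EEG encoder.
        # frame_idx: (B,) integer frame positions in [0, num_frames).
        t = self.frame_embed(frame_idx)            # (B, cond_dim)
        c = torch.cat([eeg_feat, t], dim=-1)       # (B, eeg_dim + cond_dim)
        return self.proj(c)                        # (B, cond_dim)

if __name__ == "__main__":
    B, num_frames = 4, 16
    eeg_feat = torch.randn(B, 512)                 # stand-in EEG features
    cond = TemporalCondition()
    # One conditioning vector per frame position; a conditional generator would
    # consume these (e.g. through its mapping network) to produce each frame.
    per_frame = [cond(eeg_feat, torch.full((B,), k, dtype=torch.long))
                 for k in range(num_frames)]
    print(len(per_frame), per_frame[0].shape)      # 16 torch.Size([4, 512])
```

The key design point this illustrates is that the same EEG embedding is reused across all frame positions, so the frame-index embedding alone carries the temporal ordering of the generated video.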