Achieving More Human Brain-Like Vision via Human EEG Representational Alignment
Zitong Lu, Yile Wang, Julie D. Golomb
Published: 2024/1/30
Abstract
Despite advancements in artificial intelligence, object recognition models still lag behind human brains in emulating visual information processing. Recent studies have highlighted the potential of using neural data to mimic brain processing; however, these typically rely on invasive neural recordings from non-human subjects, leaving a critical gap in understanding human visual perception. Addressing this gap, we present 'Re(presentational)Al(ignment)net' (ReAlnet), a vision model aligned with human brain activity via non-invasive EEG, which demonstrates significantly higher similarity to human brain representations than traditional computer vision models. Our image-to-brain multi-layer encoding framework advances human neural alignment by optimizing multiple model layers jointly, enabling the model to efficiently learn and mimic the human brain's visual representational patterns across object categories and across modalities. Our findings suggest that ReAlnet aligns artificial neural networks more closely with human brain representations, making it more similar to human brain processing than traditional computer vision models and taking an important step toward bridging the gap between artificial and human vision and achieving more brain-like artificial intelligence systems.
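The abstract describes the multi-layer encoding framework only at a high level. As a point of concreteness, the sketch below illustrates one plausible shape such a framework could take: per-layer readouts that predict an EEG representation from several backbone layers at once, trained with a similarity objective. All names (MultiLayerEncoder, alignment_loss, the correlation-based loss, the averaging scheme) are illustrative assumptions, not the paper's actual architecture or loss.

```python
# A minimal sketch of image-to-brain multi-layer encoding, assuming a PyTorch
# backbone whose intermediate activations are collected (e.g., via forward
# hooks). Hypothetical design; the paper's implementation may differ.
import torch
import torch.nn as nn


class MultiLayerEncoder(nn.Module):
    """Predicts an EEG representation jointly from several model layers."""

    def __init__(self, layer_dims, eeg_dim):
        super().__init__()
        # One linear readout per aligned layer (assumed design choice).
        self.readouts = nn.ModuleList(nn.Linear(d, eeg_dim) for d in layer_dims)

    def forward(self, layer_acts):
        # layer_acts: list of per-layer activations, each of shape [B, ...].
        # Average the per-layer predictions into a single EEG estimate.
        preds = [head(act.flatten(1)) for head, act in zip(self.readouts, layer_acts)]
        return torch.stack(preds).mean(dim=0)  # [B, eeg_dim]


def alignment_loss(pred_eeg, true_eeg):
    # Encourage predicted EEG patterns to correlate with measured ones:
    # 1 - Pearson r per sample (one plausible representational objective).
    pred = pred_eeg - pred_eeg.mean(dim=1, keepdim=True)
    true = true_eeg - true_eeg.mean(dim=1, keepdim=True)
    r = (pred * true).sum(dim=1) / (pred.norm(dim=1) * true.norm(dim=1) + 1e-8)
    return (1.0 - r).mean()
```

In a setup like this, the alignment term would typically be combined with the model's original recognition objective, e.g. `total = task_loss + lam * alignment_loss(pred, eeg)`, so the network retains object-recognition performance while its intermediate representations are pulled toward the human EEG patterns.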