CoMET: A Contrastive-Masked Brain Foundation Model for Universal EEG Representation

Ang Li, Zikai Wang, Liuyin Yang, Zhenyu Wang, Tianheng Xu, Honglin Hu, Marc M. Van Hulle

Published: 2025/8/30

Abstract

Electroencephalography (EEG) is a non-invasive technique for recording brain activity, widely used in brain-computer interfaces, clinical practice, and healthcare. Traditional deep models for EEG typically focus on a specific dataset and task, which limits model size and generalization. Recently, self-supervised brain foundation models have emerged and been applied to various downstream tasks. Nevertheless, these models still have limitations: current SOTA models typically rely on a masked-reconstruction strategy; however, the features of adjacent EEG channels are highly correlated, which biases pre-training toward low-dimensional signal-similarity features in local regions while neglecting the global discriminative patterns vital for downstream tasks. To address these limitations, we propose a brain foundation model called CoMET. Specifically, we employ a masked autoencoder with patching and embedding redesigned for EEG as the backbone, and devise a novel contrastive-learning framework with mirror-scale augmentation to strengthen global discrimination. CoMET is pre-trained on mixed EEG datasets covering more than 3,000 subjects and over one million samples. It is evaluated on ten different downstream datasets, and the SOTA results demonstrate CoMET's superior ability to extract universal EEG representations, as well as its strong clinical potential.
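The abstract describes pre-training that pairs masked reconstruction with a contrastive objective. A minimal NumPy sketch of such a joint loss is shown below; the NT-Xent contrastive term and the masked-patch MSE term are standard choices from the self-supervised literature, assumed here for illustration rather than CoMET's exact formulation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (InfoNCE) contrastive loss over two augmented views.

    z1, z2: (N, D) embeddings of the same N samples under two views
    (e.g., two mirror-scale augmentations, per the paper's description).
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = len(z1)
    # Positive for row i is the other view of the same sample.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    pos = sim[np.arange(2 * n), targets]
    return float(np.mean(logsumexp - pos))

def masked_mse(pred, target, mask):
    """Masked-autoencoder reconstruction loss, averaged over masked patches only."""
    diff = (pred - target) ** 2
    return float((diff * mask).sum() / mask.sum())

def joint_loss(pred, target, mask, z1, z2, lam=1.0):
    """Hypothetical combined objective: reconstruction + lam * contrastive."""
    return masked_mse(pred, target, mask) + lam * nt_xent_loss(z1, z2)
```

The weighting `lam` between the two terms is an assumed hyperparameter; the paper's contribution is precisely that adding the contrastive term counteracts the masked objective's bias toward local channel-similarity features.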