A Knowledge-Driven Cross-view Contrastive Learning for EEG
Representation
- URL: http://arxiv.org/abs/2310.03747v1
- Date: Thu, 21 Sep 2023 08:53:51 GMT
- Title: A Knowledge-Driven Cross-view Contrastive Learning for EEG
Representation
- Authors: Weining Weng, Yang Gu, Qihui Zhang, Yingying Huang, Chunyan Miao, and
Yiqiang Chen
- Abstract summary: This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
- Score: 48.85731427874065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the abundant neurophysiological information in the
electroencephalogram (EEG) signal, EEG signals integrated with deep learning
methods have gained substantial traction across numerous real-world tasks.
However, the development of supervised learning methods based on EEG signals
has been hindered by the high cost of, and significant label discrepancies in,
manually labeling large-scale EEG datasets. Self-supervised frameworks are adopted
in vision and language fields to solve this issue, but the lack of EEG-specific
theoretical foundations hampers their applicability across various tasks. To
solve these challenges, this paper proposes a knowledge-driven cross-view
contrastive learning framework (KDC2), which integrates neurological theory to
extract effective representations from EEG with limited labels. The KDC2 method
creates scalp and neural views of EEG signals, simulating the internal and
external representation of brain activity. Sequentially, inter-view and
cross-view contrastive learning pipelines in combination with various
augmentation methods are applied to capture neural features from different
views. By modeling prior neural knowledge based on homologous neural
information consistency theory, the proposed method extracts invariant and
complementary neural knowledge to generate combined representations.
Experimental results on different downstream tasks demonstrate that our method
outperforms state-of-the-art methods, highlighting the superior generalization
of neural knowledge-supported EEG representations across various brain tasks.
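The cross-view contrastive objective described in the abstract can be sketched as an InfoNCE loss between paired embeddings of the two views, where the scalp-view and neural-view embeddings of the same trial form the positive pair and all other pairings in the batch act as negatives. The function below is a minimal NumPy illustration of that idea, not the authors' implementation; the names `scalp_emb` and `neural_emb` and the temperature value are assumptions.

```python
import numpy as np

def cross_view_info_nce(scalp_emb: np.ndarray, neural_emb: np.ndarray,
                        temperature: float = 0.1) -> float:
    """InfoNCE loss between two views of the same EEG batch.

    scalp_emb, neural_emb: (batch, dim) embeddings of the same trials
    under the two views; row i of each array is a positive pair.
    """
    # L2-normalize so dot products become cosine similarities
    s = scalp_emb / np.linalg.norm(scalp_emb, axis=1, keepdims=True)
    n = neural_emb / np.linalg.norm(neural_emb, axis=1, keepdims=True)

    logits = s @ n.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positives sit on the diagonal: view i of trial i matches view i
    idx = np.arange(s.shape[0])
    return float(-log_probs[idx, idx].mean())
```

With matched views the diagonal dominates the similarity matrix and the loss is small; with unrelated embeddings it approaches log(batch size).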
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop a Pre-trained model based Multimodal Mood Reader for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on a large-scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
- Learning Robust Deep Visual Representations from EEG Brain Recordings [13.768240137063428]
This study proposes a two-stage method where the first step is to obtain EEG-derived features for robust learning of deep representations.
We demonstrate the generalizability of our feature extraction pipeline across three different datasets using deep-learning architectures.
We propose a novel framework to transform unseen images into the EEG space and reconstruct them with approximation.
arXiv Detail & Related papers (2023-10-25T10:26:07Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- EEG-based Cross-Subject Driver Drowsiness Recognition with an Interpretable Convolutional Neural Network [0.0]
We develop a novel convolutional neural network combined with an interpretation technique that allows sample-wise analysis of important features for classification.
Results show that the model achieves an average accuracy of 78.35% on 11 subjects for leave-one-out cross-subject recognition.
arXiv Detail & Related papers (2021-05-30T14:47:20Z)
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns the emotional discriminative information by highlighting the transferable EEG brain regions data and samples adaptively.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
- Multi-Scale Neural Network for EEG Representation Learning in BCI [2.105172041656126]
We propose a novel deep multi-scale neural network that discovers feature representations in multiple frequency/time ranges.
By representing EEG signals with spectral-temporal information, the proposed method can be utilized for diverse paradigms.
arXiv Detail & Related papers (2020-03-02T04:06:47Z)
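The last entry above represents EEG with spectral-temporal information across multiple frequency ranges. One common way to obtain such multi-frequency features is per-band spectral power over the canonical EEG bands; the sketch below is a generic illustration of that idea, not code from any of the papers listed, and the band edges follow a common convention rather than any paper's specific choice.

```python
import numpy as np

# Canonical EEG frequency bands in Hz (a common convention; exact
# edges vary between studies and are an assumption here).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal: np.ndarray, fs: float) -> dict:
    """Mean spectral power of a single-channel EEG segment per band.

    signal: 1-D array of samples; fs: sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}
```

For example, a pure 10 Hz sine sampled at 250 Hz concentrates its power in the alpha band, so `band_powers` would report alpha as the dominant band.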
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.