Self-supervised EEG Representation Learning for Automatic Sleep Staging
- URL: http://arxiv.org/abs/2110.15278v1
- Date: Wed, 27 Oct 2021 04:17:27 GMT
- Title: Self-supervised EEG Representation Learning for Automatic Sleep Staging
- Authors: Chaoqi Yang, Danica Xiao, M. Brandon Westover, Jimeng Sun
- Abstract summary: We propose a self-supervised model, named Contrast with the World Representation (ContraWR), for EEG signal representation learning.
ContraWR is evaluated on three real-world EEG datasets that include both at-home and in-lab recording settings.
ContraWR beats supervised learning when fewer training labels are available.
- Score: 26.560516415840965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective: In this paper, we aim to learn robust vector representations from
massive unlabeled Electroencephalogram (EEG) signals, such that the learned
representations (1) are expressive enough to replace the raw signals in the
sleep staging task; and (2) provide better predictive performance than
supervised models in scenarios of fewer labels and noisy samples.
Materials and Methods: We propose a self-supervised model, named Contrast
with the World Representation (ContraWR), for EEG signal representation
learning, which uses global statistics from the dataset to distinguish signals
associated with different sleep stages. The ContraWR model is evaluated on
three real-world EEG datasets that include both at-home and in-lab recording
settings.
Results: ContraWR outperforms recent self-supervised learning methods (MoCo,
SimCLR, BYOL, and SimSiam) on the sleep staging task across the three datasets.
ContraWR also beats supervised learning when fewer training labels are
available (e.g., a 4% accuracy improvement when less than 2% of the data is
labeled). Moreover, the model provides informative representations in 2D
projection.
Discussion: The proposed model can be generalized to other unsupervised
physiological signal learning tasks. Future directions include exploring
task-specific data augmentations and combining self-supervised with supervised
methods, building upon the initial success of self-supervised learning in this
paper.
Conclusions: We show that ContraWR is robust to noise and can provide
high-quality EEG representations for downstream prediction tasks. In low-label
scenarios (e.g., when only 2% of the data has labels), ContraWR shows much
better predictive power (e.g., a 4% improvement in sleep staging accuracy) than
supervised baselines.
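The abstract does not spell out the training objective, but the core idea of contrasting each EEG epoch against a dataset-level "world" representation built from global statistics can be sketched as follows. This is a minimal PyTorch sketch under assumptions of my own (two augmented views per epoch, the batch-mean embedding standing in for the world representation, a temperature hyperparameter tau); it is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def contra_wr_loss(z_anchor, z_positive, tau=0.5):
    """z_anchor, z_positive: (batch, dim) embeddings of two augmented EEG views."""
    z_anchor = F.normalize(z_anchor, dim=1)
    z_positive = F.normalize(z_positive, dim=1)

    # "World" representation: the average embedding over the batch, standing in
    # for global dataset statistics and acting as the shared negative reference.
    z_world = F.normalize(z_anchor.mean(dim=0, keepdim=True), dim=1)

    sim_pos = (z_anchor * z_positive).sum(dim=1) / tau      # anchor vs. its positive view
    sim_world = (z_anchor @ z_world.t()).squeeze(1) / tau   # anchor vs. the world

    # Encourage each anchor to be closer to its positive than to the world representation.
    logits = torch.stack([sim_pos, sim_world], dim=1)
    labels = torch.zeros(z_anchor.size(0), dtype=torch.long, device=z_anchor.device)
    return F.cross_entropy(logits, labels)
```

In the full method the two views would come from signal augmentations of the same EEG epoch, and the world representation could be maintained as a running average over the whole dataset rather than a single batch; both choices here are assumptions for illustration.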
Related papers
- SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign
Language Understanding [132.78015553111234]
Hand gestures play a crucial role in the expression of sign language.
Current deep learning based methods for sign language understanding (SLU) are prone to over-fitting due to insufficient sign data resources.
We propose SignBERT+, the first self-supervised pre-trainable framework that incorporates a model-aware hand prior.
arXiv Detail & Related papers (2023-05-08T17:16:38Z) - Enhancing Self-Supervised Learning for Remote Sensing with Elevation
Data: A Case Study with Scarce And High Level Semantic Labels [1.534667887016089]
This work proposes a hybrid unsupervised and supervised learning method to pre-train models for Earth observation downstream tasks.
We combine a contrastive pre-training approach with a pixel-wise regression pretext task that predicts coarse elevation maps.
arXiv Detail & Related papers (2023-04-13T23:01:11Z) - A Benchmark Generative Probabilistic Model for Weak Supervised Learning [2.0257616108612373]
Weakly supervised learning approaches have been developed to alleviate the annotation burden.
We show that probabilistic latent variable models (PLVMs) achieve state-of-the-art performance across four datasets.
arXiv Detail & Related papers (2023-03-31T07:06:24Z) - Dual Learning for Large Vocabulary On-Device ASR [64.10124092250128]
Dual learning is a paradigm for semi-supervised machine learning that seeks to leverage unsupervised data by solving two opposite tasks at once.
We provide an analysis of an on-device-sized streaming conformer trained on the entirety of Librispeech, showing relative WER improvements of 10.7%/5.2% without an LM and 11.7%/16.4% with an LM.
arXiv Detail & Related papers (2023-01-11T06:32:28Z) - Vector-Based Data Improves Left-Right Eye-Tracking Classifier
Performance After a Covariate Distributional Shift [0.0]
We propose a fine-grained data approach to EEG-ET data collection in order to create more robust benchmarking.
We train machine learning models on both coarse-grained and fine-grained data and compare their accuracies when tested on data with similar/different distributional patterns.
Results showed that models trained on fine-grained, vector-based data were less susceptible to distributional shifts than models trained on coarse-grained, binary-classified data.
arXiv Detail & Related papers (2022-07-31T16:27:50Z) - Value-Consistent Representation Learning for Data-Efficient
Reinforcement Learning [105.70602423944148]
We propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making.
Instead of aligning an imagined future state with the real state returned by the environment, VCR applies a $Q$-value head to both states and obtains two distributions of action values.
Our method achieves new state-of-the-art performance among search-free RL algorithms.
arXiv Detail & Related papers (2022-06-25T03:02:25Z) - Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z) - Investigating Power laws in Deep Representation Learning [4.996066540156903]
We propose a framework to evaluate the quality of representations in unlabelled datasets.
We estimate the coefficient of the power law, $\alpha$, across three key attributes that influence representation learning.
Notably, $\alpha$ is computable from the representations without knowledge of any labels, thereby offering a framework to evaluate the quality of representations in unlabelled datasets (see the sketch after this list).
arXiv Detail & Related papers (2022-02-11T18:11:32Z) - Self-supervised Contrastive Learning for EEG-based Sleep Staging [29.897104001988748]
We propose a self-supervised contrastive learning method of EEG signals for sleep stage classification.
In particular, the network's performance depends on the choice of transformations and the amount of unlabeled data used in the training process.
arXiv Detail & Related papers (2021-09-16T10:05:33Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the created dataset can significantly improve the ability of the learned FER model.
To reduce the cost of training on such a large dataset, we further propose a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.