Knowledge-guided EEG Representation Learning
- URL: http://arxiv.org/abs/2403.03222v1
- Date: Thu, 15 Feb 2024 01:52:44 GMT
- Title: Knowledge-guided EEG Representation Learning
- Authors: Aditya Kommineni, Kleanthis Avramidis, Richard Leahy, Shrikanth
Narayanan
- Abstract summary: Self-supervised learning has produced impressive results in multimedia domains of audio, vision and speech.
We propose a self-supervised model for EEG, which provides robust performance and remarkable parameter efficiency.
We also propose a novel knowledge-guided pre-training objective that accounts for the idiosyncrasies of the EEG signal.
- Score: 27.8095014391814
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Self-supervised learning has produced impressive results in multimedia
domains of audio, vision and speech. This paradigm is equally, if not more,
relevant for the domain of biosignals, owing to the scarcity of labelled data
in such scenarios. The ability to leverage large-scale unlabelled data to learn
robust representations could help improve the performance of numerous inference
tasks on biosignals. Given the inherent domain differences between multimedia
modalities and biosignals, the established objectives for self-supervised
learning may not translate well to this domain. Hence, there is an unmet need
to adapt these methods to biosignal analysis. In this work we propose a
self-supervised model for EEG, which provides robust performance and remarkable
parameter efficiency by using state space-based deep learning architecture. We
also propose a novel knowledge-guided pre-training objective that accounts for
the idiosyncrasies of the EEG signal. The results indicate improved embedding
representation learning and downstream performance compared to prior works on
exemplary tasks. Also, the proposed objective significantly reduces the amount
of pre-training data required to obtain performance equivalent to prior works.
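The abstract does not spell out the knowledge-guided objective itself. As a rough illustration of the family it belongs to, the sketch below scores a masked-segment reconstruction objective on a toy EEG-like signal; the mask ratio and the linear-interpolation "model" are illustrative stand-ins, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(signal, mask_ratio=0.3, rng=rng):
    """Illustrative masked-reconstruction pretraining objective: hide a
    fraction of time steps and score how well a stand-in "model" (here,
    plain linear interpolation from visible samples) recovers them.
    A real model would replace the interpolation with a learned
    encoder/decoder."""
    n = signal.shape[0]
    n_masked = max(1, int(mask_ratio * n))
    masked_idx = rng.choice(n, size=n_masked, replace=False)
    visible_idx = np.setdiff1d(np.arange(n), masked_idx)  # sorted
    # Stand-in "reconstruction": interpolate masked samples from visible ones.
    recon = np.interp(masked_idx, visible_idx, signal[visible_idx])
    # Mean squared error on the masked positions only.
    return float(np.mean((recon - signal[masked_idx]) ** 2))

# A smooth, structured signal should be easier to reconstruct than noise.
t = np.linspace(0, 1, 256)
smooth = np.sin(2 * np.pi * 5 * t)
noise = rng.standard_normal(256)
assert masked_reconstruction_loss(smooth) < masked_reconstruction_loss(noise)
```

The point of scoring only the masked positions is that the objective rewards exploiting temporal structure, which is the kind of signal property a knowledge-guided variant would bias the model toward.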
Related papers
- EEGFormer: Towards Transferable and Interpretable Large-Scale EEG Foundation Model [39.363511340878624]
We present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data.
To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings.
arXiv Detail & Related papers (2024-01-11T17:36:24Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
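The abstract describes an RL agent that searches for a masking ratio. As a minimal stand-in for that search (the paper's actual RL formulation is richer), the sketch below runs an epsilon-greedy bandit over a few candidate ratios against a synthetic reward; all values and the reward shape are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate masking ratios the "agent" can choose from (assumed values).
ratios = [0.25, 0.5, 0.75]

def proxy_reward(ratio, rng=rng):
    """Stand-in for downstream performance after pretraining at this
    masking ratio; here a fixed preference for 0.5 plus small noise."""
    return -abs(ratio - 0.5) + 0.01 * rng.standard_normal()

# Epsilon-greedy bandit: a minimal stand-in for the paper's RL search.
values = np.zeros(len(ratios))   # running mean reward per ratio
counts = np.zeros(len(ratios))
for step in range(300):
    if rng.random() < 0.1:                    # explore
        a = int(rng.integers(len(ratios)))
    else:                                     # exploit current best
        a = int(np.argmax(values))
    r = proxy_reward(ratios[a])
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental mean update

best = ratios[int(np.argmax(values))]
```

Under this toy reward the search settles on the 0.5 ratio; the real method additionally learns *where* to mask, not just how much.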
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Learning ECG signal features without backpropagation [0.0]
We propose a novel method to generate representations for time series-type data.
This method relies on ideas from theoretical physics to construct a compact representation in a data-driven way.
We demonstrate the effectiveness of our approach on the task of ECG signal classification, achieving state-of-the-art performance.
arXiv Detail & Related papers (2023-07-04T21:35:49Z)
- In-Domain Self-Supervised Learning Improves Remote Sensing Image Scene Classification [5.323049242720532]
Self-supervised learning has emerged as a promising approach for remote sensing image classification.
We present a study of different self-supervised pre-training strategies and evaluate their effect across 14 downstream datasets.
arXiv Detail & Related papers (2023-07-04T10:57:52Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
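One of ALP's two ingredients, the inverse dynamics prediction objective, can be illustrated compactly: predict the action that took the agent from one state to the next. The toy dynamics and the linear least-squares "model" below are assumptions for illustration, not ALP's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy rollout: each action directly controls the change in a 2-D state.
actions = rng.standard_normal((500, 2))
states = np.vstack([np.zeros(2), np.cumsum(actions, axis=0)])

# Inverse dynamics: predict a_t from the pair (s_t, s_{t+1}).
X = np.hstack([states[:-1], states[1:]])         # stacked state pairs
W, *_ = np.linalg.lstsq(X, actions, rcond=None)  # linear inverse model
pred = X @ W
err = float(np.mean((pred - actions) ** 2))      # near zero here
```

Because the toy action is exactly the state difference, the linear model recovers it almost perfectly; the pedagogical point is that predicting actions forces the representation to encode what the agent can change.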
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
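Spectrogram augmentation of this kind is commonly done SpecAugment-style, by zeroing random frequency bands and time spans. The sketch below shows the idea on a fake spectrogram; the mask counts and widths are illustrative defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def spec_augment(spec, n_freq_masks=1, n_time_masks=1, max_width=8, rng=rng):
    """SpecAugment-style masking: zero out random frequency bands and
    time spans of a (freq, time) spectrogram."""
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(n_freq_masks):
        w = int(rng.integers(1, max_width + 1))
        f0 = int(rng.integers(0, n_freq - w + 1))
        out[f0:f0 + w, :] = 0.0          # mask a horizontal frequency band
    for _ in range(n_time_masks):
        w = int(rng.integers(1, max_width + 1))
        t0 = int(rng.integers(0, n_time - w + 1))
        out[:, t0:t0 + w] = 0.0          # mask a vertical time span
    return out

spec = rng.random((64, 100)) + 0.1       # strictly positive fake spectrogram
aug = spec_augment(spec)
```

Masking whole bands rather than isolated bins is the design choice that makes the model robust to missing spectral or temporal regions, which helps under data scarcity.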
arXiv Detail & Related papers (2021-08-05T10:39:39Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It bridges the domain gap by learning enhanced transferable features, exploiting temporal cues in videos and the inherent correlations across modalities for gesture recognition.
Results show that our approach substantially recovers performance, with gains of up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data [15.71234837305808]
We consider how to adapt techniques and architectures used for language modelling (LM) to encephalography modelling (EM).
We find that a single pre-trained model is capable of modelling completely novel raw EEG sequences recorded with differing hardware.
Both the internal representations of this model and the entire architecture can be fine-tuned to a variety of downstream BCI and EEG classification tasks.
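The contrastive task at the heart of this line of work can be sketched with a generic InfoNCE-style loss: make an anchor embedding more similar to its positive (e.g. a nearby or masked segment of the same recording) than to negatives. This is a generic stand-in, not BENDR's exact loss or architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

def info_nce(anchor, positive, negatives, temp=0.1):
    """Contrastive (InfoNCE-style) loss: pull the anchor toward its
    positive and away from negatives, using cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] +
                    [cos(anchor, n) for n in negatives]) / temp
    sims -= sims.max()                      # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -float(np.log(probs[0]))         # positive sits at index 0

z = rng.standard_normal(16)
# Loss is low when the positive is a slight perturbation of the anchor...
aligned = info_nce(z, z + 0.01 * rng.standard_normal(16),
                   [rng.standard_normal(16) for _ in range(8)])
# ...and high when the "positive" is unrelated, like the negatives.
random_pos = info_nce(z, rng.standard_normal(16),
                      [rng.standard_normal(16) for _ in range(8)])
```

Because the loss depends only on relative similarities, the same objective transfers across recordings made with differing hardware, which is the property the abstract highlights.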
arXiv Detail & Related papers (2021-01-28T14:54:01Z)
- Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data [16.457778420360537]
We propose an unsupervised framework to reduce the reliance on human supervision.
The proposed framework utilizes two stacked convolutional autoencoders to learn latent representations from wearable electrocardiogram (ECG) and electrodermal activity (EDA) signals.
Our method outperforms current state-of-the-art results that have performed arousal detection on the same datasets.
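The autoencoder idea, compressing raw signals into a low-dimensional latent code and scoring the reconstruction, can be shown with a linear (PCA) autoencoder as a stand-in for the paper's stacked convolutional autoencoders; the data shapes and latent size below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "wearable" windows: 200 windows of 32 samples whose variance
# lives mostly in a 4-D latent subspace, plus small noise.
latent = rng.standard_normal((200, 4))
mixing = rng.standard_normal((4, 32))
X = latent @ mixing + 0.05 * rng.standard_normal((200, 32))

# Linear autoencoder via SVD: encoder = top-k right singular vectors,
# decoder = their transpose (a stand-in for learned conv layers).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
encode = lambda x: x @ Vt[:k].T          # 32-D window -> 4-D code
decode = lambda z: z @ Vt[:k]            # 4-D code -> reconstruction
recon_err = float(np.mean((decode(encode(Xc)) - Xc) ** 2))
total_var = float(np.mean(Xc ** 2))      # recon_err is a tiny fraction
```

The low-dimensional code is what downstream arousal detection would consume; the unsupervised reconstruction objective is what removes the reliance on human labels.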
arXiv Detail & Related papers (2020-08-24T22:01:55Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the resulting dataset can significantly improve the ability of the learned FER model.
To curb the training cost of this enlarged dataset, we propose a dataset distillation strategy that compresses it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.