An embedding for EEG signals learned using a triplet loss
- URL: http://arxiv.org/abs/2304.06495v1
- Date: Thu, 23 Mar 2023 09:05:20 GMT
- Title: An embedding for EEG signals learned using a triplet loss
- Authors: Pierre Guetschel, Théodore Papadopoulo and Michael Tangermann
- Abstract summary: In a brain-computer interface (BCI), decoded brain state information can be used with minimal time delay to control an application or to monitor the user.
A challenge in such decoding tasks is posed by the small dataset sizes.
We propose novel domain-specific embeddings for neurophysiological data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurophysiological time series recordings like the electroencephalogram (EEG)
or local field potentials are obtained from multiple sensors. They can be
decoded by machine learning models in order to estimate the ongoing brain state
of a patient or healthy user. In a brain-computer interface (BCI), this decoded
brain state information can be used with minimal time delay to either control
an application, e.g., for communication or for rehabilitation after stroke, or
to passively monitor the ongoing brain state of the subject, e.g., in a
demanding work environment. A specific challenge in such decoding tasks is
posed by the small dataset sizes in BCI compared to other domains of machine
learning like computer vision or natural language processing. A possibility to
tackle classification or regression problems in BCI despite small training data
sets is through transfer learning, which utilizes data from other sessions,
subjects or even datasets to train a model. In this exploratory study, we
propose novel domain-specific embeddings for neurophysiological data. Our
approach is based on metric learning and builds upon the recently proposed
ladder loss. Using embeddings allowed us to benefit both from the good
generalisation abilities and robustness of deep learning and from the fast
training of classical machine learning models for subject-specific calibration.
In offline analyses using EEG data of 14 subjects, we tested the embeddings'
feasibility and compared their efficiency with state-of-the-art deep learning
models and conventional machine learning pipelines. In summary, we propose the
use of metric learning to obtain pre-trained embeddings of EEG-BCI data as a
means to incorporate domain knowledge and to reach competitive performance on
novel subjects with minimal calibration requirements.
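
For illustration, a minimal sketch (not the authors' code) of the overall recipe: pre-train an EEG encoder with a metric-learning objective, then freeze it and calibrate a fast classical classifier on a few labelled epochs from a new subject. It uses the standard triplet loss rather than the ladder loss the paper builds upon; the network architecture, hyperparameters, and the choice of LDA for calibration are assumptions.

```python
# Sketch of metric-learning pretraining for EEG embeddings + classical calibration.
# All architectural details and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class EEGEmbedder(nn.Module):
    """Toy convolutional encoder mapping (channels, time) EEG epochs to unit-norm vectors."""
    def __init__(self, n_channels=32, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=25, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=11, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)

model = EEGEmbedder()
triplet = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(anchor, positive, negative):
    """Anchor/positive share a label (e.g. the same imagined movement); negative does not."""
    opt.zero_grad()
    loss = triplet(model(anchor), model(positive), model(negative))
    loss.backward()
    opt.step()
    return loss.item()

def calibrate(calib_epochs, calib_labels):
    """Subject-specific calibration: embed a few labelled epochs with the frozen
    encoder and fit a fast classical model (LDA assumed here)."""
    with torch.no_grad():
        feats = model(calib_epochs).cpu().numpy()
    clf = LinearDiscriminantAnalysis()
    clf.fit(feats, calib_labels)
    return clf
```

In this setup only the lightweight classifier is refit per subject, which is what keeps calibration requirements minimal once the embedding has been pre-trained across sessions or subjects.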
Related papers
- An Efficient Contrastive Unimodal Pretraining Method for EHR Time Series Data [35.943089444017666]
We propose an efficient method of contrastive pretraining tailored for long clinical timeseries data.
Our model demonstrates the ability to impute missing measurements, providing clinicians with deeper insights into patient conditions.
arXiv Detail & Related papers (2024-10-11T19:05:25Z)
- A Spiking Neural Network based on Neural Manifold for Augmenting Intracortical Brain-Computer Interface Data [5.039813366558306]
Brain-computer interfaces (BCIs) transform neural signals in the brain into instructions to control external devices.
With the advent of advanced machine learning methods, the capability of brain-computer interfaces has been enhanced like never before.
Here, we use spiking neural networks (SNN) as data generators.
arXiv Detail & Related papers (2022-03-26T15:32:31Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a neural deep network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Closed-Loop Neural Interfaces with Embedded Machine Learning [12.977151652608047]
We review the recent developments in embedding machine learning in neural interfaces.
We present our optimized tree-based model for low-power and memory-efficient classification of neural signal in brain implants.
Using energy-aware learning and model compression, we show that the proposed oblique trees can outperform conventional machine learning models in applications such as seizure or tremor detection and motor decoding.
arXiv Detail & Related papers (2020-10-15T14:34:08Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016 [35.68916211292525]
A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals.
EEG is sensitive to noise/artifact and suffers between-subject/within-subject non-stationarity.
It is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects.
arXiv Detail & Related papers (2020-04-13T16:44:55Z)
- EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications [65.32004302942218]
Brain-Computer Interface (BCI) is a powerful communication tool between users and systems.
Recent technological advances have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications.
arXiv Detail & Related papers (2020-01-28T10:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.