Ultra Efficient Transfer Learning with Meta Update for Cross Subject EEG
Classification
- URL: http://arxiv.org/abs/2003.06113v3
- Date: Mon, 1 Mar 2021 08:43:37 GMT
- Title: Ultra Efficient Transfer Learning with Meta Update for Cross Subject EEG
Classification
- Authors: Tiehang Duan, Mihir Chauhan, Mohammad Abuzar Shaikh, Jun Chu, Sargur
Srihari
- Abstract summary: We propose an efficient transfer learning method, named Meta UPdate Strategy (MUPS-EEG) for continuous EEG classification across different subjects.
The model learns effective representations with meta update, which accelerates adaptation to a new subject and mitigates forgetting of knowledge on previous subjects.
- Score: 0.6562256987706128
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The pattern of the Electroencephalogram (EEG) signal differs significantly
across subjects, which poses challenges for EEG classifiers in terms of 1)
effectively adapting a learned classifier to a new subject, and 2) retaining
knowledge of known subjects after the adaptation. We propose an efficient
transfer learning method, named Meta UPdate Strategy (MUPS-EEG), for continuous
EEG classification across different subjects. The model learns effective
representations with meta update, which accelerates adaptation to a new subject
and at the same time mitigates forgetting of knowledge on previous subjects. The
proposed mechanism originates from meta learning and works to 1) find a feature
representation that is broadly suitable for different subjects, and 2) maximize
the sensitivity of the loss function for fast adaptation to a new subject. The
method can be applied to all deep-learning-based models. Extensive experiments
on two public datasets demonstrate the effectiveness of the proposed model,
which outperforms the current state of the art by a large margin in terms of both
adapting to a new subject and retaining knowledge of learned subjects.
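The meta update mechanism described in the abstract follows the general inner/outer-loop pattern of gradient-based meta learning. Below is a minimal, hedged sketch of that pattern on a toy linear-regression problem; the model, task generator, and step sizes are illustrative assumptions, not the paper's actual EEG architecture or the exact MUPS-EEG procedure.

```python
import numpy as np

def loss_and_grad(w, X, y):
    """Mean squared error and its gradient for a linear model y_hat = X @ w."""
    err = X @ w - y
    return float(np.mean(err ** 2)), 2.0 * X.T @ err / len(y)

def meta_update(w, tasks, inner_lr=0.05, outer_lr=0.01):
    """One meta update: adapt per task (inner step), then move the shared
    initialization toward parameters that adapt quickly (outer step).
    Uses the first-order approximation (no second derivatives)."""
    meta_grad = np.zeros_like(w)
    for X_s, y_s, X_q, y_q in tasks:          # support/query split per "subject"
        _, g = loss_and_grad(w, X_s, y_s)     # inner gradient on support set
        w_adapted = w - inner_lr * g          # fast adaptation
        _, g_q = loss_and_grad(w_adapted, X_q, y_q)
        meta_grad += g_q                      # query gradient at adapted params
    return w - outer_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])               # shared structure across "subjects"

def make_task():
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    return X[:10], y[:10], X[10:], y[10:]    # support set, query set

w = np.zeros(2)
for _ in range(200):
    w = meta_update(w, [make_task() for _ in range(4)])
```

Each "task" plays the role of one subject: the inner step adapts to that subject's support trials, while the outer step moves the shared initialization toward parameters that adapt quickly, which is the sense in which the learned representation stays broadly suitable across subjects.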
Related papers
- A new training approach for text classification in Mental Health: LatentGLoss [0.0]
This study presents a multi-stage approach to mental health classification by leveraging machine learning algorithms, deep learning architectures, and transformer-based models.
The core contribution of this study lies in a novel training strategy involving a dual-model architecture composed of a teacher and a student network.
The experimental results highlight the effectiveness of each modeling stage and demonstrate that the proposed loss function and teacher-student interaction significantly enhance the model's learning capacity in mental health prediction tasks.
arXiv Detail & Related papers (2025-04-09T19:34:31Z) - CSTA: Spatial-Temporal Causal Adaptive Learning for Exemplar-Free Video Class-Incremental Learning [62.69917996026769]
A class-incremental learning task requires learning and preserving both spatial appearance and temporal action involvement.
We propose a framework that equips separate adapters to learn new class patterns, accommodating the incremental information requirements unique to each class.
A causal compensation mechanism is proposed to reduce conflicts between different types of information during increment and memorization.
arXiv Detail & Related papers (2025-01-13T11:34:55Z) - EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters [8.238310169506944]
We propose a novel ensemble of weight-decomposed low-rank adaptation methods, EDoRA, for parameter-efficient mental imagery task adaptation.
The proposed method has performed better than full fine-tuning and state-of-the-art PEFT methods for mental imagery classification.
arXiv Detail & Related papers (2024-12-08T14:57:00Z) - Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Latent Alignment with Deep Set EEG Decoders [44.128689862889715]
We introduce the Latent Alignment method that won the Benchmarks for EEG Transfer Learning competition.
We present its formulation as a deep set applied on the set of trials from a given subject.
Our experimental results show that performing statistical distribution alignment at later stages in a deep learning model is beneficial to the classification accuracy.
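The distribution alignment idea can be illustrated with a minimal sketch: standardize each subject's latent features so they share a common distribution before classification. The per-feature standardization below is an illustrative assumption, not the competition-winning formulation or the deep-set architecture itself.

```python
import numpy as np

def align_latents(Z, eps=1e-8):
    """Standardize one subject's latent features to zero mean and unit
    variance, so features from different subjects share a common scale."""
    mu = Z.mean(axis=0, keepdims=True)
    sd = Z.std(axis=0, keepdims=True)
    return (Z - mu) / (sd + eps)

rng = np.random.default_rng(1)
# Two "subjects" whose latent features differ in offset and scale.
Z_a = 3.0 + 2.0 * rng.normal(size=(50, 4))
Z_b = -1.0 + 0.5 * rng.normal(size=(50, 4))
A, B = align_latents(Z_a), align_latents(Z_b)
```

After alignment, both subjects' features have matched first and second moments, so a classifier trained on one subject's aligned latents transfers more readily to the other's.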
arXiv Detail & Related papers (2023-11-29T12:40:45Z) - Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via
Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has emerged as a promising approach to alleviate the burden of labeled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z) - EEG-NeXt: A Modernized ConvNet for The Classification of Cognitive
Activity from EEG [0.0]
One of the main challenges in electroencephalogram (EEG) based brain-computer interface (BCI) systems is learning the subject/session invariant features to classify cognitive activities.
We propose a novel end-to-end machine learning pipeline, EEG-NeXt, which facilitates transfer learning.
arXiv Detail & Related papers (2022-12-08T10:15:52Z) - Data augmentation for learning predictive models on EEG: a systematic
comparison [79.84079335042456]
Deep learning for electroencephalography (EEG) classification tasks has been growing rapidly in recent years.
Deep learning for EEG classification tasks has been limited by the relatively small size of EEG datasets.
Data augmentation has been a key ingredient to obtain state-of-the-art performances across applications such as computer vision or speech.
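Two augmentations commonly benchmarked in such EEG comparisons, Gaussian noise injection and temporal shifting, can be sketched as follows; the parameter values and the circular-shift variant are illustrative assumptions, not the study's exact transform set.

```python
import numpy as np

def add_noise(x, sigma=0.1, rng=None):
    """Additive Gaussian noise applied per channel and time step."""
    rng = rng or np.random.default_rng()
    return x + sigma * rng.standard_normal(x.shape)

def time_shift(x, shift):
    """Circularly shift the signal along the time axis (last dimension)."""
    return np.roll(x, shift, axis=-1)

rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 250))   # 8 EEG channels, 250 time samples
noisy = add_noise(epoch, sigma=0.05, rng=rng)
shifted = time_shift(epoch, 10)
```

Both transforms preserve the epoch's shape and label, which is the property that lets them enlarge small EEG training sets without new recordings.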
arXiv Detail & Related papers (2022-06-29T09:18:15Z) - DcnnGrasp: Towards Accurate Grasp Pattern Recognition with Adaptive
Regularizer Learning [13.08779945306727]
Current state-of-the-art methods ignore the category information of objects, which is crucial for grasp pattern recognition.
This paper presents a novel dual-branch convolutional neural network (DcnnGrasp) to achieve joint learning of object category classification and grasp pattern recognition.
arXiv Detail & Related papers (2022-05-11T00:34:27Z) - 2021 BEETL Competition: Advancing Transfer Learning for Subject
Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer-Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z) - Guiding Generative Language Models for Data Augmentation in Few-Shot
Text Classification [59.698811329287174]
We leverage GPT-2 for generating artificial training instances in order to improve classification performance.
Our results show that fine-tuning GPT-2 on a handful of labeled instances leads to consistent classification improvements.
arXiv Detail & Related papers (2021-11-17T12:10:03Z) - BENDR: using transformers and a contrastive self-supervised learning
task to learn from massive amounts of EEG data [15.71234837305808]
We consider how to adapt techniques and architectures used for language modelling (LM) to encephalography modelling (EM).
We find that a single pre-trained model is capable of modelling completely novel raw EEG sequences recorded with differing hardware.
Both the internal representations of this model and the entire architecture can be fine-tuned to a variety of downstream BCI and EEG classification tasks.
arXiv Detail & Related papers (2021-01-28T14:54:01Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Few-Shot Relation Learning with Attention for EEG-based Motor Imagery
Classification [11.873435088539459]
Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
arXiv Detail & Related papers (2020-03-03T02:34:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.