Few-Shot Relation Learning with Attention for EEG-based Motor Imagery Classification
- URL: http://arxiv.org/abs/2003.01300v2
- Date: Wed, 19 Aug 2020 06:15:53 GMT
- Title: Few-Shot Relation Learning with Attention for EEG-based Motor Imagery Classification
- Authors: Sion An, Soopil Kim, Philip Chikontwe and Sang Hyun Park
- Abstract summary: Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
- Score: 11.873435088539459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG)
signals, in particular motor imagery (MI) data, have received a lot of attention
and show potential for the design of key technologies in both healthcare and
other industries. MI data is generated when a subject imagines
movement of limbs and can be used to aid rehabilitation as well as in
autonomous driving scenarios. Thus, classification of MI signals is vital for
EEG-based BCI systems. Recently, MI EEG classification techniques using deep
learning have shown improved performance over conventional techniques. However,
due to inter-subject variability, the scarcity of unseen subject data, and low
signal-to-noise ratio, extracting robust features and improving accuracy is
still challenging. In this context, we propose a novel two-way few-shot network
that efficiently learns how to learn representative features of unseen subject
categories and how to classify them with limited MI EEG data.
The pipeline includes an embedding module that learns feature representations
from a set of samples, an attention mechanism for key signal feature discovery,
and a relation module for final classification based on relation scores between
a support set and a query signal. In addition to the unified learning of
feature similarity and a few-shot classifier, our method emphasizes informative
features in the support data relevant to the query data, which generalizes
better to unseen subjects. For evaluation, we used the BCI Competition IV 2b
dataset and achieved a 9.3% accuracy improvement in the
20-shot classification task with state-of-the-art performance. Experimental
results demonstrate the effectiveness of employing attention and the overall
generality of our method.
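The following is a minimal PyTorch sketch of the kind of pipeline the abstract describes: an embedding module, attention over the support set, and a relation module that scores support-query pairs. The layer sizes, the scaled dot-product attention, and the per-class score aggregation are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a relation-network-style few-shot EEG classifier with
# attention. All module sizes, kernel shapes, and the attention formulation
# are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingModule(nn.Module):
    """1-D CNN mapping a raw EEG trial (channels x time) to a feature vector."""

    def __init__(self, in_channels=3, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=7, padding=3),
            nn.BatchNorm1d(feat_dim),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),    # collapse the time axis
        )

    def forward(self, x):               # x: (batch, channels, time)
        return self.net(x).squeeze(-1)  # (batch, feat_dim)


class RelationModule(nn.Module):
    """Scores how well a (support, query) feature pair matches."""

    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),                # relation score in [0, 1]
        )

    def forward(self, support_feat, query_feat):
        return self.net(torch.cat([support_feat, query_feat], dim=-1))


def classify_query(embed, relate, support_x, support_y, query_x, n_way=2):
    """Attention-weighted relation scoring: the query attends to support
    samples, and per-class aggregated relation scores decide the label."""
    s = embed(support_x)                                      # (n_support, feat_dim)
    q = embed(query_x)                                        # (1, feat_dim)
    attn = F.softmax(q @ s.t() / s.size(-1) ** 0.5, dim=-1)   # (1, n_support)
    scores = relate(s, q.expand_as(s)).squeeze(-1)            # (n_support,)
    weighted = attn.squeeze(0) * scores
    class_scores = torch.stack(
        [weighted[support_y == c].sum() for c in range(n_way)]
    )
    return class_scores.argmax().item()


# Example 2-way, 3-shot episode: 6 support trials, one query trial,
# 3 EEG channels, 400 time samples (shapes are illustrative only).
embed, relate = EmbeddingModule(), RelationModule()
support_x = torch.randn(6, 3, 400)
support_y = torch.tensor([0, 0, 0, 1, 1, 1])
query_x = torch.randn(1, 3, 400)
pred = classify_query(embed, relate, support_x, support_y, query_x)
```

In a K-shot episode, support_x would hold K labelled trials per class from the unseen subject and query_x a single unlabelled trial; the predicted class is the one whose attention-weighted relation scores sum highest.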
Related papers
- Feature Selection via Dynamic Graph-based Attention Block in MI-based EEG Signals [0.0]
Brain-computer interface (BCI) technology enables direct interaction between humans and computers by analyzing brain signals.
EEG signals are often affected by a low signal-to-noise ratio, physiological artifacts, and individual variability, which poses challenges for extracting distinct features.
Also, motor imagery (MI)-based EEG signals could contain features with low correlation to MI characteristics, which might cause the weights of the deep model to become biased towards those features.
arXiv Detail & Related papers (2024-10-31T00:53:29Z)
- Quantifying Spatial Domain Explanations in BCI using Earth Mover's Distance [6.038190786160174]
BCIs facilitate unique communication between humans and computers, benefiting severely disabled individuals.
It is crucial to assess and explain BCI performance, offering clear explanations so that potential users can avoid frustration when the system does not work as expected.
arXiv Detail & Related papers (2024-05-02T13:35:15Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose estimation by a clear margin.
arXiv Detail & Related papers (2022-03-10T23:35:00Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer-Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- How Knowledge Graph and Attention Help? A Quantitative Analysis into Bag-level Relation Extraction [66.09605613944201]
We quantitatively evaluate the effect of attention and Knowledge Graph on bag-level relation extraction (RE).
We find that (1) higher attention accuracy may lead to worse performance as it may harm the model's ability to extract entity mention features; (2) the performance of attention is largely influenced by various noise distribution patterns; and (3) KG-enhanced attention indeed improves RE performance, while not through enhanced attention but by incorporating entity prior.
arXiv Detail & Related papers (2021-07-26T09:38:28Z)
- Subject Independent Emotion Recognition using EEG Signals Employing Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework capable of subject-independent emotion recognition is presented.
A convolutional neural network (CNN) with an attention mechanism is used to perform the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z)
- CNN-based Approaches For Cross-Subject Classification in Motor Imagery: From The State-of-The-Art to DynamicNet [0.2936007114555107]
Motor imagery (MI)-based brain-computer interface (BCI) systems are being increasingly employed to provide alternative means of communication and control.
Accurately classifying MI from brain signals is essential to obtain reliable BCI systems.
Deep learning approaches have started to emerge as valid alternatives to standard machine learning techniques.
arXiv Detail & Related papers (2021-05-17T14:57:13Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, as it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically-relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Deep Feature Mining via Attention-based BiLSTM-GCN for Human Motor Imagery Recognition [9.039355687614076]
This paper presents a novel deep learning approach designed towards remarkably accurate and responsive motor imagery (MI) recognition based on scalp EEG.
A BiLSTM with an attention mechanism derives relevant features from raw EEG signals (see the sketch after this list).
The 0.4-second detection framework has shown effective and efficient prediction based on individual and group-wise training, with 98.81% and 94.64% accuracy, respectively.
arXiv Detail & Related papers (2020-05-02T10:03:40Z)
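As a rough illustration of the attention-driven BiLSTM feature extraction mentioned in the last entry above, the sketch below pools BiLSTM hidden states with a learned attention over time steps. The layer sizes and the scoring layer are assumptions, and the graph-convolutional part of the original BiLSTM-GCN model is omitted.

```python
# Minimal sketch of attention-pooled BiLSTM feature extraction from raw EEG.
# Layer sizes, channel count, and the single-layer attention scorer are
# illustrative assumptions, not the cited paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveBiLSTM(nn.Module):
    def __init__(self, n_channels=22, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                          # x: (batch, time, channels)
        h, _ = self.lstm(x)                        # (batch, time, 2*hidden)
        w = F.softmax(self.attn(h), dim=1)         # attention weights over time
        ctx = (w * h).sum(dim=1)                   # attention-weighted summary
        return self.head(ctx)                      # class logits


# Example: a batch of 8 trials, 500 time samples, 22 EEG channels.
logits = AttentiveBiLSTM()(torch.randn(8, 500, 22))
```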
This list is automatically generated from the titles and abstracts of the papers on this site.