Efficacy of Transformer Networks for Classification of Raw EEG Data
- URL: http://arxiv.org/abs/2202.05170v1
- Date: Tue, 8 Feb 2022 17:12:27 GMT
- Title: Efficacy of Transformer Networks for Classification of Raw EEG Data
- Authors: Gourav Siddhad, Anmol Gupta, Debi Prosad Dogra, Partha Pratim Roy
- Abstract summary: Classifying electroencephalogram (EEG) data has been challenging.
Automated feature extraction with deep learning has not yet been accomplished for EEG.
Results indicate that transformer-based deep learning models can reduce the need for heavy feature extraction of EEG data.
- Score: 21.156545825932405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the unprecedented success of transformer networks in natural language
processing (NLP), recently, they have been successfully adapted to areas like
computer vision, generative adversarial networks (GAN), and reinforcement
learning. Classifying electroencephalogram (EEG) data has been challenging and
researchers have been overly dependent on pre-processing and hand-crafted
feature extraction. Although automated feature extraction with deep learning has
been achieved in several other domains, it has not yet been accomplished for EEG. In
this paper, the efficacy of the transformer network for the classification of
raw EEG data (cleaned and pre-processed) is explored. The performance of
transformer networks was evaluated on a local dataset (age and gender data) and
a public dataset (STEW). First, a classifier using a transformer network is built
to classify the age and gender of a person with raw resting-state EEG data.
Second, the classifier is tuned for mental workload classification with open
access raw multi-tasking mental workload EEG data (STEW). The network achieves
accuracy comparable to the state of the art on both the local (Age and Gender
dataset; 94.53% (gender) and 87.79% (age)) and the public (STEW dataset; 95.28%
(two workload levels) and 88.72% (three workload levels)) datasets. The
accuracy values have been achieved using raw EEG data without feature
extraction. Results indicate that transformer-based deep learning models can
reduce the need for heavy feature extraction of EEG data for successful
classification.
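The abstract does not reproduce the network's configuration, so the following is only a minimal sketch of a transformer-encoder classifier over raw EEG windows; the channel count, window length, model width, positional embedding, and mean-pooling head are illustrative assumptions rather than the authors' published architecture.

```python
# Minimal sketch of a transformer-encoder classifier over raw EEG windows.
# Hypothetical settings: 14 EEG channels, 512-sample windows, a small encoder.
import torch
import torch.nn as nn


class EEGTransformerClassifier(nn.Module):
    def __init__(self, n_channels=14, max_len=512, d_model=64,
                 n_heads=4, n_layers=3, n_classes=2, dropout=0.1):
        super().__init__()
        # Project the per-time-step channel vector into the model dimension.
        self.input_proj = nn.Linear(n_channels, d_model)
        # Learned positional embedding so the encoder sees temporal order.
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Classify from the mean-pooled sequence representation.
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, channels, time) raw EEG -> (batch, time, channels)
        x = x.permute(0, 2, 1)
        h = self.input_proj(x) + self.pos_emb[:, : x.size(1)]
        h = self.encoder(h)                    # (batch, time, d_model)
        return self.head(h.mean(dim=1))        # (batch, n_classes)


# Example: a batch of 8 windows, binary (e.g., two workload levels) output.
model = EEGTransformerClassifier(n_channels=14, n_classes=2)
logits = model(torch.randn(8, 14, 512))        # -> shape (8, 2)
```

Mean pooling is just the simplest readout for a sketch; a [CLS]-style token or attention pooling would serve the same purpose.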
Related papers
- Importance-Aware Adaptive Dataset Distillation [53.79746115426363]
Development of deep learning models is enabled by the availability of large-scale datasets.
Dataset distillation aims to synthesize a compact dataset that retains the essential information from the large original dataset.
We propose an importance-aware adaptive dataset distillation (IADD) method that can improve distillation performance.
arXiv Detail & Related papers (2024-01-29T03:29:39Z)
- Amplifying Pathological Detection in EEG Signaling Pathways through Cross-Dataset Transfer Learning [10.212217551908525]
We study the effectiveness of data and model scaling and cross-dataset knowledge transfer in a real-world pathology classification task.
We identify the challenges of possible negative transfer and emphasize the significance of some key components.
Our findings indicate that a small, generic model (e.g., ShallowNet) performs well on a single dataset; however, a larger model (e.g., TCN) performs better at transfer and learning from a larger and more diverse dataset.
arXiv Detail & Related papers (2023-09-19T20:09:15Z)
- Towards Data-Efficient Detection Transformers [77.43470797296906]
We show most detection transformers suffer from significant performance drops on small-size datasets.
We empirically analyze the factors that affect data efficiency, through a step-by-step transition from a data-efficient RCNN variant to the representative DETR.
We introduce a simple yet effective label augmentation method to provide richer supervision and improve data efficiency.
arXiv Detail & Related papers (2022-03-17T17:56:34Z)
- GANSER: A Self-supervised Data Augmentation Framework for EEG-based Emotion Recognition [15.812231441367022]
We propose a novel data augmentation framework, namely Generative Adversarial Network-based Self-supervised Data Augmentation (GANSER).
As the first to combine adversarial training with self-supervised learning for EEG-based emotion recognition, the proposed framework can generate high-quality simulated EEG samples.
A transformation function is employed to mask parts of EEG signals and force the generator to synthesize potential EEG signals based on the remaining parts (a minimal sketch of such a masking step follows this list).
arXiv Detail & Related papers (2021-09-07T14:42:55Z)
- Transformer Networks for Data Augmentation of Human Physical Activity Recognition [61.303828551910634]
State-of-the-art models like Recurrent Generative Adversarial Networks (RGAN) are used to generate realistic synthetic data.
In this paper, transformer-based generative adversarial networks, which have global attention on the data, are compared with RGAN on the PAMAP2 and Real World Human Activity Recognition data sets.
arXiv Detail & Related papers (2021-09-02T16:47:29Z)
- Deep Transformer Networks for Time Series Classification: The NPP Safety Case [59.20947681019466]
An advanced temporal neural network referred to as the Transformer is used in a supervised learning fashion to model the time-dependent NPP simulation data.
The Transformer can learn the characteristics of the sequential data and yield promising performance with approximately 99% classification accuracy on the testing dataset.
arXiv Detail & Related papers (2021-04-09T14:26:25Z)
- TELESTO: A Graph Neural Network Model for Anomaly Classification in Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at the recognition of re-occurring anomaly types to enable remediation automation.
We propose a method that is invariant to dimensionality changes of given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z)
- ScalingNet: extracting features from raw EEG data for emotion recognition [4.047737925426405]
We propose a novel convolutional layer that adaptively extracts effective data-driven spectrogram-like features from raw EEG signals.
The proposed neural network architecture based on the scaling layer, referred to as ScalingNet, achieves state-of-the-art results on the established DEAP benchmark dataset.
arXiv Detail & Related papers (2021-02-07T08:54:27Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network is an end-to-end classifier, as it takes raw EEG signals as input and does not require complex EEG signal pre-processing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- Data Augmentation for Enhancing EEG-based Emotion Recognition with Deep Generative Models [13.56090099952884]
We propose three methods for augmenting EEG training data to enhance the performance of emotion recognition models.
For the full usage strategy, all of the generated data are appended to the training dataset without judging the quality of the generated data.
The experimental results demonstrate that the augmented training datasets produced by our methods enhance the performance of EEG-based emotion recognition models.
arXiv Detail & Related papers (2020-06-04T21:23:09Z)
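As noted in the GANSER entry above, here is a minimal sketch of the kind of masking transformation it describes: part of each raw EEG trial is hidden so that a generator can be trained to synthesize the missing samples from the rest. The mask ratio, contiguous-span masking, and tensor shapes are illustrative assumptions, not the GANSER paper's exact transformation function.

```python
# Hypothetical masking transform: hide a contiguous span of each EEG trial so a
# generator can be asked to reconstruct it from the remaining samples.
import torch


def mask_eeg(x: torch.Tensor, mask_ratio: float = 0.3):
    """x: (batch, channels, time) raw EEG.

    Returns the masked signal and a boolean mask that is True where samples
    were hidden (the region a generator would be trained to fill in).
    """
    batch, _, time = x.shape
    span = int(time * mask_ratio)
    mask = torch.zeros(batch, 1, time, dtype=torch.bool)
    starts = torch.randint(0, time - span + 1, (batch,))
    for i, s in enumerate(starts.tolist()):
        mask[i, :, s:s + span] = True  # hide one contiguous span per trial
    return x.masked_fill(mask, 0.0), mask


# Example: mask roughly 30% of each 512-sample, 14-channel trial.
masked, mask = mask_eeg(torch.randn(8, 14, 512))
```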