Real-time EEG-based Emotion Recognition using Discrete Wavelet
Transforms on Full and Reduced Channel Signals
- URL: http://arxiv.org/abs/2110.05635v1
- Date: Mon, 11 Oct 2021 22:28:43 GMT
- Title: Real-time EEG-based Emotion Recognition using Discrete Wavelet
Transforms on Full and Reduced Channel Signals
- Authors: Josef Bajada and Francesco Borg Bonello
- Abstract summary: Real-time EEG-based Emotion Recognition (EEG-ER) with consumer-grade EEG devices involves classification of emotions using a reduced number of channels.
These devices typically provide only four or five channels, unlike the high number of channels typically used in most current state-of-the-art research.
We propose to use Discrete Wavelet Transforms (DWT) to extract time-frequency domain features, and we use time-windows of a few seconds to perform EEG-ER classification.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Real-time EEG-based Emotion Recognition (EEG-ER) with consumer-grade EEG
devices involves classification of emotions using a reduced number of channels.
These devices typically provide only four or five channels, unlike the high
number of channels (32 or more) typically used in most current state-of-the-art
research. In this work we propose to use Discrete Wavelet Transforms (DWT) to
extract time-frequency domain features, and we use time-windows of a few
seconds to perform EEG-ER classification. This technique can be used in
real-time, as opposed to post-hoc on the full session data. We also apply
baseline removal preprocessing, developed in prior research, to our proposed
DWT Entropy and Energy features, which improves classification accuracy
significantly. We consider two different classifier architectures, a 3D
Convolutional Neural Network (3D CNN) and a Support Vector Machine (SVM). We
evaluate both models on subject-independent and subject-dependent setups to
classify the Valence and Arousal dimensions of an individual's emotional state.
We test them on both the full 32-channel data provided by the DEAP dataset, and
also a reduced 5-channel extract of the same dataset. The SVM model performs
best on all the presented scenarios, achieving an accuracy of 95.32% on Valence
and 95.68% on Arousal for the full 32-channel subject-dependent case, beating
prior real-time EEG-ER subject-dependent benchmarks. On the subject-independent
case an accuracy of 80.70% on Valence and 81.41% on Arousal was also obtained.
Reducing the input data to 5 channels only degrades the accuracy by an average
of 3.54% across all scenarios, making this model appropriate for use with more
accessible low-end EEG devices.
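The paper's exact feature pipeline is not reproduced here, but the idea of extracting per-band DWT Energy and Entropy features from a short EEG time-window can be sketched as follows. This is a minimal illustrative sketch, assuming a Haar wavelet and Shannon entropy over normalized squared detail coefficients; the authors' choice of wavelet, window length, and decomposition depth may differ.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def dwt_features(window, levels=4):
    """Decompose a 1-D EEG window and compute (Energy, Entropy) per detail band.

    Energy  = sum of squared detail coefficients in the band.
    Entropy = Shannon entropy of the squared coefficients normalized by Energy.
    """
    features = []
    current = list(window)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        energy = sum(c * c for c in detail)
        entropy = 0.0
        if energy > 0:
            for c in detail:
                p = (c * c) / energy
                if p > 0:
                    entropy -= p * math.log(p)
        features.append((energy, entropy))
    return features

# Example: a 128-sample window (e.g. 1 s at 128 Hz, the DEAP sampling rate)
window = [math.sin(0.3 * n) for n in range(128)]
feats = dwt_features(window, levels=4)
```

Concatenating these per-band features across channels would yield the feature vector fed to a classifier such as an SVM; with 5 channels instead of 32, the same pipeline simply produces a shorter vector.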
Related papers
- CEReBrO: Compact Encoder for Representations of Brain Oscillations Using Efficient Alternating Attention [53.539020807256904]
We introduce a Compact Encoder for Representations of Brain Oscillations using alternating attention (CEReBrO).
Our tokenization scheme represents EEG signals as per-channel patches.
We propose an alternating attention mechanism that jointly models intra-channel temporal dynamics and inter-channel spatial correlations, achieving 2x speed improvement with 6x less memory required compared to standard self-attention.
arXiv Detail & Related papers (2025-01-18T21:44:38Z) - CwA-T: A Channelwise AutoEncoder with Transformer for EEG Abnormality Detection [0.4448543797168715]
CwA-T is a novel framework that combines a channelwise CNN-based autoencoder with a single-head transformer classifier for efficient EEG abnormality detection.
Evaluated on the TUH Abnormal EEG Corpus, the proposed model achieves 85.0% accuracy, 76.2% sensitivity, and 91.2% specificity at the per-case level.
The framework retains interpretability through its channelwise design, demonstrating great potential for future applications in neuroscience research and clinical practice.
arXiv Detail & Related papers (2024-12-19T04:38:34Z) - Dual-TSST: A Dual-Branch Temporal-Spectral-Spatial Transformer Model for EEG Decoding [2.0721229324537833]
We propose a novel decoding architecture network with a dual-branch temporal-spectral-spatial transformer (Dual-TSST)
Our proposed Dual-TSST performs superiorly across various tasks, achieving a promising average EEG classification accuracy of 80.67%.
This study provides a new approach to high-performance EEG decoding, and has great potential for future CNN-Transformer based applications.
arXiv Detail & Related papers (2024-09-05T05:08:43Z) - Hourglass Tokenizer for Efficient Transformer-Based 3D Human Pose Estimation [73.31524865643709]
We present a plug-and-play pruning-and-recovering framework, called Hourglass Tokenizer (HoT), for efficient transformer-based 3D pose estimation from videos.
Our HoT begins with pruning pose tokens of redundant frames and ends with recovering full-length tokens, resulting in a few pose tokens in the intermediate transformer blocks.
Our method can achieve both high efficiency and estimation accuracy compared to the original VPT models.
arXiv Detail & Related papers (2023-11-20T18:59:51Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - Multi-Tier Platform for Cognizing Massive Electroencephalogram [6.100405014798822]
An end-to-end platform is built for precisely cognizing brain activities.
A spiking neural network (SNN) based tier is designed to distill the principal information, in terms of spike streams, from the raw features.
The proposed tier-3 transposes the time and space domains of the spike patterns from the SNN and feeds the transposed pattern matrices into an artificial neural network (ANN; specifically, a Transformer).
arXiv Detail & Related papers (2022-04-21T01:27:58Z) - Robust learning from corrupted EEG with dynamic spatial filtering [68.82260713085522]
Building machine learning models using EEG recorded outside of the laboratory requires methods robust to noisy data and randomly missing channels.
We propose dynamic spatial filtering (DSF), a multi-head attention module that can be plugged in before the first layer of a neural network.
We tested DSF on public EEG data encompassing 4,000 recordings with simulated channel corruption and on a private dataset of 100 at-home recordings of mobile EEG with natural corruption.
arXiv Detail & Related papers (2021-05-27T02:33:16Z) - EEG-Inception: An Accurate and Robust End-to-End Neural Network for
EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification: it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z) - Convolutional Neural Networks for Automatic Detection of Artifacts from
Independent Components Represented in Scalp Topographies of EEG Signals [9.088303226909279]
Artifacts due to eye movements and blinks, muscular/cardiac activity, and generic electrical disturbances have to be recognized and eliminated.
ICA is effective at splitting the signal into independent components (ICs) whose re-projections onto 2D scalp topographies (images) allow artifacts to be recognized and separated.
We present a completely automatic and effective framework for EEG artifact recognition from IC topoplots, based on 2D Convolutional Neural Networks (CNNs).
Experiments have shown an overall accuracy above 98%, requiring 1.4 s on a standard PC to classify 32 topoplots.
arXiv Detail & Related papers (2020-09-08T12:40:10Z) - End-to-End Multi-speaker Speech Recognition with Transformer [88.22355110349933]
We replace the RNN-based encoder-decoder in the speech recognition model with a Transformer architecture.
We also modify the self-attention component to be restricted to a segment rather than the whole sequence in order to reduce computation.
arXiv Detail & Related papers (2020-02-10T16:29:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.