A Two-Stage Efficient 3-D CNN Framework for EEG Based Emotion
Recognition
- URL: http://arxiv.org/abs/2208.00883v1
- Date: Tue, 26 Jul 2022 05:33:08 GMT
- Title: A Two-Stage Efficient 3-D CNN Framework for EEG Based Emotion
Recognition
- Authors: Ye Qiao, Mohammed Alnemari, Nader Bagherzadeh
- Abstract summary: The framework consists of two stages: the first constructs efficient models named EEGNet.
In the second stage, we binarize these models to further compress them and deploy them easily on edge devices.
The proposed binarized EEGNet models achieve accuracies of 81%, 95%, and 99% with storage costs of 0.11 Mbits, 0.28 Mbits, and 0.46 Mbits, respectively.
- Score: 3.147603836269998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel two-stage framework for emotion recognition using
EEG data that outperforms state-of-the-art models while keeping the model size
small and computationally efficient. The framework consists of two stages: the
first stage constructs efficient models named EEGNet, which are inspired by the
state-of-the-art efficient architecture and employ inverted-residual blocks that
contain depthwise separable convolutional layers. The EEGNet models achieve
average classification accuracies of 90%, 96.6%, and 99.5% on both valence and
arousal labels with only 6.4k, 14k, and 25k parameters, respectively. In terms of
accuracy and storage cost, these models outperform the previous state-of-the-art
result by up to 9%. In the second stage, we binarize these models to further
compress them and make them easy to deploy on edge devices. Binary Neural
Networks (BNNs) typically degrade model accuracy; in this paper, we improve the
binarized EEGNet models by introducing three novel methods, achieving a 20%
improvement over the baseline binary models. The proposed binarized EEGNet models
achieve accuracies of 81%, 95%, and 99% with storage costs of 0.11 Mbits,
0.28 Mbits, and 0.46 Mbits, respectively. These models enable the deployment of an
accurate human emotion recognition system in an edge environment.
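To make the two building blocks named in the abstract concrete, below is a minimal PyTorch sketch of (1) an inverted-residual block built from a depthwise separable 3-D convolution and (2) a weight-binarized convolution trained with a straight-through estimator. The channel counts, kernel sizes, expansion ratio, and helper names (InvertedResidual3d, BinaryConv3d) are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only: an inverted-residual block with depthwise separable
# 3-D convolutions (stage one) and a weight-binarized convolution with a
# straight-through estimator (stage two). Sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InvertedResidual3d(nn.Module):
    """1x1x1 expansion -> 3x3x3 depthwise conv -> 1x1x1 projection,
    with a residual skip when input and output shapes match."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand: int = 4):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, hidden, kernel_size=1, bias=False),    # expansion
            nn.BatchNorm3d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv3d(hidden, hidden, kernel_size=3, stride=stride,
                      padding=1, groups=hidden, bias=False),         # depthwise
            nn.BatchNorm3d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv3d(hidden, out_ch, kernel_size=1, bias=False),    # projection
            nn.BatchNorm3d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out


class _BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; identity gradient inside [-1, 1]."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()


class BinaryConv3d(nn.Conv3d):
    """Conv3d whose weights are binarized to {-1, +1} on the fly."""

    def forward(self, x):
        w_bin = _BinarizeSTE.apply(self.weight)
        return F.conv3d(x, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


if __name__ == "__main__":
    # A toy 3-D EEG tensor: (batch, channels, depth, height, width).
    x = torch.randn(2, 8, 4, 9, 9)
    y = InvertedResidual3d(8, 8)(x)
    print(y.shape)  # torch.Size([2, 8, 4, 9, 9])
```

In the second stage described above, the full-precision convolutions inside such blocks would be swapped for the binarized variant, which is what brings the storage cost down to a fraction of a megabit.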
Related papers
- A-SDM: Accelerating Stable Diffusion through Redundancy Removal and Performance Optimization [54.113083217869516]
In this work, we first explore the computational redundancy part of the network.
We then prune the redundancy blocks of the model and maintain the network performance.
Thirdly, we propose a global-regional interactive (GRI) attention to speed up the computationally intensive attention part.
arXiv Detail & Related papers (2023-12-24T15:37:47Z)
- Yin Yang Convolutional Nets: Image Manifold Extraction by the Analysis of Opposites [1.1560177966221703]
The Yin Yang Convolutional Network is an architecture that extracts the visual manifold.
Our first model reached 93.32% test accuracy, 0.8% more than the older SOTA in this category.
We also performed an analysis on ImageNet, where we reached 66.49% validation accuracy with 1.6M parameters.
arXiv Detail & Related papers (2023-10-24T19:48:07Z)
- Rethinking Mobile Block for Efficient Attention-based Models [60.0312591342016]
This paper focuses on developing modern, efficient, lightweight models for dense predictions while trading off parameters, FLOPs, and performance.
Inverted Residual Block (IRB) serves as the infrastructure for lightweight CNNs, but no counterpart has been recognized by attention-based studies.
We extend the CNN-based IRB to attention-based models and abstract a one-residual Meta Mobile Block (MMB) for lightweight model design.
arXiv Detail & Related papers (2023-01-03T15:11:41Z)
- Elastic-Link for Binarized Neural Network [9.83865304744923]
"Elastic-Link" (EL) module enrich information flow within a BNN by adaptively adding real-valued input features to the subsequent convolutional output features.
EL produces a significant improvement on the challenging large-scale ImageNet dataset.
With the integration of ReActNet, it yields a new state-of-the-art result of 71.9% top-1 accuracy.
arXiv Detail & Related papers (2021-12-19T13:49:29Z)
- Multistage Pruning of CNN Based ECG Classifiers for Edge Devices [9.223908421919733]
Convolutional neural network (CNN) based deep learning has been used successfully to detect anomalous beats in ECG.
The computational complexity of existing CNN models prohibits them from being implemented in low-powered edge devices.
This paper presents a novel multistage pruning technique that reduces CNN model complexity with negligible loss in performance.
arXiv Detail & Related papers (2021-08-31T17:51:15Z)
- Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models and statistical models against the roofline model and a refined roofline model.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network is an end-to-end classifier: it takes raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- Towards Practical Lipreading with Distilled and Efficient Models [57.41253104365274]
Lipreading has witnessed a lot of progress due to the resurgence of neural networks.
Recent works have placed emphasis on aspects such as improving performance by finding the optimal architecture or improving generalization.
There is still a significant gap between the current methodologies and the requirements for an effective deployment of lipreading in practical scenarios.
We propose a series of innovations that significantly bridge that gap: first, using self-distillation, we raise the state-of-the-art performance on LRW and LRW-1000 by a wide margin, to 88.5% and 46.6%, respectively.
arXiv Detail & Related papers (2020-07-13T16:56:27Z)
- An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing [13.266626571886354]
This paper presents an accurate and robust embedded motor-imagery brain-computer interface (MI-BCI).
The proposed novel model, based on EEGNet, matches the memory-footprint and computational-resource requirements of low-power microcontroller units (MCUs).
The scaled models are deployed on a commercial Cortex-M4F MCU, taking 101 ms and consuming 4.28 mJ per inference for the smallest model, and on a Cortex-M7, taking 44 ms and consuming 18.1 mJ per inference for the medium-sized model.
arXiv Detail & Related papers (2020-03-31T19:52:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.