An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for
Low-Power Edge Computing
- URL: http://arxiv.org/abs/2004.00077v2
- Date: Wed, 29 Apr 2020 06:39:45 GMT
- Title: An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for
Low-Power Edge Computing
- Authors: Xiaying Wang, Michael Hersche, Batuhan Tömekce, Burak Kaya, Michele
Magno, Luca Benini
- Abstract summary: This paper presents an accurate and robust embedded motor-imagery brain-computer interface (MI-BCI).
The proposed novel model, based on EEGNet, matches the memory-footprint and computational-resource requirements of low-power microcontroller units (MCUs).
The scaled models are deployed on a commercial Cortex-M4F MCU taking 101ms and consuming 4.28mJ per inference for operating the smallest model, and on a Cortex-M7 with 44ms and 18.1mJ per inference for the medium-sized model.
- Score: 13.266626571886354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an accurate and robust embedded motor-imagery
brain-computer interface (MI-BCI). The proposed novel model, based on EEGNet,
matches the requirements of memory footprint and computational resources of
low-power microcontroller units (MCUs), such as the ARM Cortex-M family.
Furthermore, the paper presents a set of methods, including temporal
downsampling, channel selection, and narrowing of the classification window, to
further scale down the model to relax memory requirements with negligible
accuracy degradation. Experimental results on the Physionet EEG Motor
Movement/Imagery Dataset show that standard EEGNet achieves 82.43%, 75.07%, and
65.07% classification accuracy on 2-, 3-, and 4-class MI tasks in global
validation, outperforming the state-of-the-art (SoA) convolutional neural
network (CNN) by 2.05%, 5.25%, and 5.48%. Our novel method further scales down
the standard EEGNet at a negligible accuracy loss of 0.31% with 7.6x memory
footprint reduction and a small accuracy loss of 2.51% with 15x reduction. The
scaled models are deployed on a commercial Cortex-M4F MCU taking 101ms and
consuming 4.28mJ per inference for operating the smallest model, and on a
Cortex-M7 with 44ms and 18.1mJ per inference for the medium-sized model,
enabling a fully autonomous, wearable, and accurate low-power BCI.
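The three scaling methods described in the abstract (temporal downsampling, channel selection, and narrowing of the classification window) can be sketched on a single Physionet-shaped trial. The downsampling factor, channel subset, and window length below are illustrative placeholders, not the paper's exact choices:

```python
import numpy as np

fs = 160                               # Physionet sampling rate in Hz
trial = np.random.randn(64, 3 * fs)    # one 3-second trial: (channels, samples)

# 1) Temporal downsampling by an integer factor (e.g. 2 -> effective 80 Hz).
ds_factor = 2
downsampled = trial[:, ::ds_factor]

# 2) Channel selection: keep a subset of electrodes (placeholder indices,
#    not the paper's selected channels).
selected_channels = np.arange(0, 64, 2)       # e.g. every other electrode
reduced = downsampled[selected_channels, :]

# 3) Narrow the classification window (e.g. the first 2 s instead of 3 s).
window_s = 2
window = reduced[:, : window_s * fs // ds_factor]

print(trial.shape, window.shape)   # the network input shrinks on both axes
```

Each step shrinks the first convolutional layer's input, which is what drives the 7.6x and 15x memory-footprint reductions reported above.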
Related papers
- End-to-End Deep Transfer Learning for Calibration-free Motor Imagery
Brain Computer Interfaces [0.0]
A major issue with Motor Imagery Brain-Computer Interfaces (MI-BCIs) is their poor classification accuracy and the large amount of data required for subject-specific calibration.
This study employed deep transfer learning to develop calibration-free, subject-independent BCIs.
Three deep learning models (MIN2Net, EEGNet, and DeepConvNet) were trained and compared using an openly available dataset.
arXiv Detail & Related papers (2023-07-24T14:24:17Z)
- Ultra-low Power Deep Learning-based Monocular Relative Localization
Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization of two peer nano-drones through deep neural networks (DNNs).
To cope with the ultra-constrained nano-drone platform, we propose a vertically integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm-sized target nano-drone at distances up to 2 m using only low-resolution monochrome images.
arXiv Detail & Related papers (2023-03-03T14:14:08Z)
- Rethinking Mobile Block for Efficient Attention-based Models [60.0312591342016]
This paper focuses on developing modern, efficient, lightweight models for dense predictions while trading off parameters, FLOPs, and performance.
The Inverted Residual Block (IRB) serves as the infrastructure for lightweight CNNs, but no counterpart has been recognized in attention-based studies.
We extend the CNN-based IRB to attention-based models and abstract a one-residual Meta Mobile Block (MMB) for lightweight model design.
arXiv Detail & Related papers (2023-01-03T15:11:41Z)
- The case for 4-bit precision: k-bit Inference Scaling Laws [75.4335600212427]
Quantization methods reduce the number of bits required to represent each parameter in a model.
The final model size depends on both the number of parameters of the original model and the rate of compression.
We run more than 35,000 zero-shot experiments with 16-bit inputs and k-bit parameters to examine which quantization methods improve scaling for 3- to 8-bit precision.
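"k-bit parameters" can be illustrated with a generic symmetric uniform quantizer; this is a minimal sketch of the idea, not necessarily one of the specific methods the paper compares:

```python
import numpy as np

def quantize(w, k):
    """Quantize w to symmetric k-bit levels and return the dequantized copy."""
    levels = 2 ** (k - 1) - 1            # e.g. k=4 -> integer codes in [-7, 7]
    scale = np.max(np.abs(w)) / levels   # one real-valued scale per tensor
    q = np.round(w / scale)              # integer code per parameter
    return q * scale                     # dequantized values used at inference

w = np.linspace(-1.0, 1.0, 9)
for k in (8, 4, 3):
    err = np.max(np.abs(w - quantize(w, k)))
    print(f"{k}-bit max error: {err:.4f}")   # error grows as k shrinks
```

The final model size in this scheme is roughly (number of parameters) × k bits plus the per-tensor scales, which is the parameter/compression trade-off the summary refers to.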
arXiv Detail & Related papers (2022-12-19T18:48:33Z)
- A Two-Stage Efficient 3-D CNN Framework for EEG Based Emotion
Recognition [3.147603836269998]
The framework consists of two stages: the first involves constructing efficient EEGNet-based models.
In the second stage, we binarize these models to further compress them and deploy them easily on edge devices.
The proposed binarized EEGNet models achieve accuracies of 81%, 95%, and 99% with storage costs of 0.11Mbits, 0.28Mbits, and 0.46Mbits, respectively.
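Binarization, as used in the second stage above, collapses each weight to a single bit plus a real-valued scale; this sketch shows the common sign-based scheme (in the spirit of XNOR-Net-style binarization), though the paper's exact scheme may differ:

```python
import numpy as np

def binarize(w):
    """Binarize a weight tensor: each weight stores 1 bit plus one shared scale."""
    alpha = np.mean(np.abs(w))           # real-valued per-tensor scale
    return alpha * np.sign(w)            # weights collapse to {-alpha, +alpha}

w = np.array([0.3, -0.8, 0.1, -0.2])
print(binarize(w))                       # only the signs and one scale survive

# Storage at 1 bit/weight: e.g. ~110,000 weights ~= 0.11 Mbits, which is the
# order of magnitude of the storage costs reported above.
```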
arXiv Detail & Related papers (2022-07-26T05:33:08Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
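The roofline model used as a baseline above gives a simple lower bound on a layer's execution time: the layer is either compute-bound or memory-bound, whichever is slower. The peak-performance numbers in this sketch are made-up placeholders, not measured device figures:

```python
def roofline_time_s(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline lower bound: max of compute time and memory-transfer time."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical conv layer: 2 MFLOPs of work, 100 KB of memory traffic,
# on a device with 100 MFLOP/s peak compute and 100 MB/s peak bandwidth.
t = roofline_time_s(2e6, 1e5, peak_flops=1e8, peak_bw=1e8)
print(f"{t * 1e3:.1f} ms")   # compute-bound here: 2e6 / 1e8 = 20 ms
```

Stacked estimators such as ANNETTE refine this bound with benchmark-fitted models, since real layers rarely reach either peak.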
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for
EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, as it takes raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded
Motor-Imagery Brain-Machine Interfaces [15.07343602952606]
We propose EEG-TCNet, a novel temporal convolutional network (TCN) that achieves outstanding accuracy while requiring few trainable parameters.
Its low memory footprint and low computational complexity for inference make it suitable for embedded classification on resource-limited devices at the edge.
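The core building block of a TCN is a dilated causal 1-D convolution: the output at time t sees only inputs at t, t-d, t-2d, and so on, so long temporal contexts are covered with few parameters. This is a generic numpy sketch of that operation, not EEG-TCNet's exact architecture:

```python
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation):
    """x: (T,) signal; kernel: (K,) taps; returns the (T,) causal output."""
    T, K = len(x), len(kernel)
    y = np.zeros(T)
    for t in range(T):
        for k in range(K):
            idx = t - k * dilation       # look back k * dilation steps
            if idx >= 0:                 # causal: never read the future
                y[t] += kernel[k] * x[idx]
    return y

x = np.arange(6, dtype=float)            # [0, 1, 2, 3, 4, 5]
y = causal_dilated_conv1d(x, np.array([1.0, 1.0]), dilation=2)
print(y)  # y[t] = x[t] + x[t-2] -> [0, 1, 2, 4, 6, 8]
```

Stacking such layers with exponentially growing dilations (1, 2, 4, ...) yields a receptive field that grows geometrically while the parameter count grows only linearly, which is why TCNs suit memory-limited edge devices.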
arXiv Detail & Related papers (2020-05-31T21:45:45Z)
- Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet
Implementation for Edge Motor-Imagery Brain-Machine Interfaces [16.381467082472515]
Motor-Imagery Brain-Machine Interfaces (MI-BMIs) promise direct and accessible communication between human brains and machines.
Deep learning models have emerged for classifying EEG signals.
These models often exceed the memory and computational limitations of edge devices.
arXiv Detail & Related papers (2020-04-24T12:29:03Z)
- BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization [57.14179747713731]
We introduce a training method for minimizing inference bitlength at any granularity while maintaining accuracy.
With ImageNet, the method produces average per-layer bitlengths of 4.13, 3.76, and 4.36 bits.
arXiv Detail & Related papers (2020-02-08T04:58:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.