EENED: End-to-End Neural Epilepsy Detection based on Convolutional
Transformer
- URL: http://arxiv.org/abs/2305.10502v2
- Date: Fri, 8 Sep 2023 05:15:57 GMT
- Title: EENED: End-to-End Neural Epilepsy Detection based on Convolutional
Transformer
- Authors: Chenyu Liu, Xinliang Zhou and Yang Liu
- Abstract summary: Transformer- and Convolutional neural network (CNN)-based models have shown promising results in EEG signal processing.
We propose an end-to-end neural epilepsy detection model, EENED, that combines CNN and Transformer.
- Score: 6.24460694695129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Transformer- and Convolutional neural network (CNN)-based
models have shown promising results in EEG signal processing. Transformer models
can capture global dependencies in EEG signals through the self-attention
mechanism, while CNN models can capture local features such as sawtooth waves.
In this work, we propose an end-to-end neural epilepsy detection model, EENED,
that combines CNN and Transformer. Specifically, by introducing a convolution
module into the Transformer encoder, EENED can learn time-dependent
relationships among the patient's EEG signal features and detect local EEG
abnormalities closely related to epilepsy, such as the appearance of spikes and
scattered sharp and slow waves. Our proposed framework combines the abilities of
the Transformer and the CNN to capture EEG features at different scales and
holds promise for improving the accuracy and reliability of epilepsy detection.
Our source code will be released soon on GitHub.
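Since the EENED source code has not yet been released, the following is only a minimal
sketch of the kind of convolution-augmented Transformer encoder the abstract describes:
a depthwise temporal convolution module inserted into each encoder block, so that
self-attention captures global dependencies while the convolution picks up local
waveforms such as spikes. All module names, hyperparameters, and the Conformer-style
layout are assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' released code) of a
# convolution-augmented Transformer block for EEG window classification.
import torch
import torch.nn as nn


class ConvModule(nn.Module):
    """Depthwise 1D convolution over time to capture local waveforms (e.g. spikes)."""
    def __init__(self, d_model: int, kernel_size: int = 31):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.pointwise_in = nn.Conv1d(d_model, 2 * d_model, kernel_size=1)
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.bn = nn.BatchNorm1d(d_model)
        self.act = nn.SiLU()
        self.pointwise_out = nn.Conv1d(d_model, d_model, kernel_size=1)

    def forward(self, x):                      # x: (batch, time, d_model)
        y = self.norm(x).transpose(1, 2)       # -> (batch, d_model, time)
        y = nn.functional.glu(self.pointwise_in(y), dim=1)
        y = self.pointwise_out(self.act(self.bn(self.depthwise(y))))
        return x + y.transpose(1, 2)           # residual connection


class ConvTransformerBlock(nn.Module):
    """Self-attention for global dependencies + ConvModule for local abnormalities."""
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv = ConvModule(d_model)
        self.ffn = nn.Sequential(nn.LayerNorm(d_model),
                                 nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = self.conv(x)
        return x + self.ffn(x)


class EENEDSketch(nn.Module):
    """EEG window -> convolutional front end -> stacked blocks -> seizure/normal logits."""
    def __init__(self, n_channels: int = 22, d_model: int = 128, n_blocks: int = 4):
        super().__init__()
        self.frontend = nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3)
        self.blocks = nn.Sequential(*[ConvTransformerBlock(d_model) for _ in range(n_blocks)])
        self.head = nn.Linear(d_model, 2)

    def forward(self, eeg):                     # eeg: (batch, n_channels, time)
        x = self.frontend(eeg).transpose(1, 2)  # -> (batch, time', d_model)
        x = self.blocks(x)
        return self.head(x.mean(dim=1))         # logits for (normal, seizure)


if __name__ == "__main__":
    logits = EENEDSketch()(torch.randn(2, 22, 1024))
    print(logits.shape)  # torch.Size([2, 2])
```

Treating detection as binary classification over fixed-length EEG windows is likewise an
assumption; the paper may operate on different window lengths, channel counts, or labels.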
Related papers
- Synchronized Stepwise Control of Firing and Learning Thresholds in a Spiking Randomly Connected Neural Network toward Hardware Implementation [0.0]
We propose hardware-oriented models of intrinsic plasticity (IP) and synaptic plasticity (SP) for a spiking randomly connected neural network (RNN).
We demonstrate the effectiveness of our model through simulations of temporal data learning and anomaly detection with a spiking RNN using publicly available electrocardiograms.
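As a rough, purely illustrative reading of the stepwise threshold control described
above (not the paper's hardware-oriented IP/SP model), the toy below adapts a leaky
integrate-and-fire neuron's firing threshold in fixed steps so that its firing rate
stays near a target; all constants and the rate estimator are assumptions.

```python
# Toy sketch of stepwise intrinsic plasticity for one spiking neuron.
import numpy as np

def simulate(inputs, target_rate=0.05, step=0.05, leak=0.9, theta=1.0):
    """inputs: 1D array of input currents per time step; returns the spike train."""
    v, rate, spikes = 0.0, 0.0, []
    for i_t in inputs:
        v = leak * v + i_t                 # leaky integration of input current
        fired = v >= theta
        if fired:
            v = 0.0                        # reset membrane potential after a spike
        spikes.append(int(fired))
        rate = 0.99 * rate + 0.01 * fired  # running estimate of the firing rate
        # stepwise control: raise theta if firing too much, lower it if too little
        theta += step if rate > target_rate else -step
        theta = max(theta, 0.1)
    return np.array(spikes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spikes = simulate(rng.random(2000))
    print("mean firing rate:", spikes.mean())
```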
arXiv Detail & Related papers (2024-04-26T08:26:10Z)
- EEG-Deformer: A Dense Convolutional Transformer for Brain-computer Interfaces [17.524441950422627]
We introduce EEG-Deformer, which incorporates two main novel components into a CNN-Transformer.
EEG-Deformer learns from neurophysiologically meaningful brain regions for the corresponding cognitive tasks.
arXiv Detail & Related papers (2024-04-25T18:00:46Z)
- Focused Decoding Enables 3D Anatomical Detection by Transformers [64.36530874341666]
We propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder.
Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view.
We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results and thus alleviates the need for a vast amount of annotated data but also exhibits exceptional and highly intuitive explainability of results via attention weights.
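A small sketch of the core mechanism described above, under assumed shapes and a
made-up box format (this is not the authors' Focused Decoder implementation): each
detection query's cross-attention is restricted, via an attention mask, to feature
tokens that fall inside an atlas-derived region.

```python
# Illustrative restricted cross-attention over a flattened 3D feature map.
import torch
import torch.nn as nn

def region_mask(token_coords, region_boxes):
    """token_coords: (T, 3) voxel coords of feature tokens.
    region_boxes: (Q, 6) per-query atlas boxes as (zmin, ymin, xmin, zmax, ymax, xmax).
    Returns a (Q, T) boolean mask where True means "do NOT attend"."""
    lo, hi = region_boxes[:, None, :3], region_boxes[:, None, 3:]
    inside = ((token_coords[None] >= lo) & (token_coords[None] <= hi)).all(-1)
    return ~inside

d_model, Q = 64, 8
queries = torch.randn(1, Q, d_model)          # one query anchor per anatomical structure
tokens = torch.randn(1, 125, d_model)         # flattened 5x5x5 CT feature map
coords = torch.stack(torch.meshgrid(
    torch.arange(5), torch.arange(5), torch.arange(5), indexing="ij"), -1).reshape(-1, 3)
boxes = torch.randint(0, 3, (Q, 3)).repeat(1, 2).float()
boxes[:, 3:] += 2                              # toy 2-voxel-wide region per query

attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
out, _ = attn(queries, tokens, tokens, attn_mask=region_mask(coords, boxes))
print(out.shape)  # torch.Size([1, 8, 64])
```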
arXiv Detail & Related papers (2022-07-21T22:17:21Z)
- Transformer based Generative Adversarial Network for Liver Segmentation [4.317557160310758]
We propose a new segmentation method using a hybrid approach that combines Transformers with a Generative Adversarial Network (GAN).
Our model achieved a high dice coefficient of 0.9433, recall of 0.9515, and precision of 0.9376 and outperformed other Transformer based approaches.
arXiv Detail & Related papers (2022-05-21T19:55:43Z)
- Grasp-and-Lift Detection from EEG Signal Using Convolutional Neural Network [1.869097450593631]
This article automates detection of hand movement activity, viz. grasp-and-lift (GAL), from 32-channel EEG signals.
The proposed pipeline combines preprocessing and end-to-end detection steps, eliminating the need for hand-crafted feature engineering.
arXiv Detail & Related papers (2022-02-12T19:27:06Z)
- DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG Signals [62.997667081978825]
We develop a novel statistical point process model called driven temporal point processes (DriPP).
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses.
arXiv Detail & Related papers (2021-12-08T13:07:21Z)
- The Nuts and Bolts of Adopting Transformer in GANs [124.30856952272913]
We investigate the properties of Transformer in the generative adversarial network (GAN) framework for high-fidelity image synthesis.
Our study leads to a new alternative design of Transformers in GAN, a convolutional neural network (CNN)-free generator termed STrans-G.
arXiv Detail & Related papers (2021-10-25T17:01:29Z)
- EEG-GNN: Graph Neural Networks for Classification of Electroencephalogram (EEG) Signals [20.991468018187362]
Convolutional neural networks (CNNs) have frequently been used to extract subject-invariant features from electroencephalogram (EEG) signals.
We overcome this limitation by tailoring the concepts of convolution and pooling applied to 2D grid-like inputs for the functional network of electrode sites.
We develop various graph neural network (GNN) models that project electrodes onto the nodes of a graph, where the node features are represented as EEG channel samples collected over a trial, and nodes can be connected by weighted/unweighted edges.
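The graph construction described above can be illustrated with a minimal sketch
(assumptions only, not the paper's exact GNN variants): each electrode becomes a node
whose feature vector is that channel's trial samples, edges are weighted by inter-channel
correlation, and a simple graph convolution mixes information over the electrode graph.

```python
# Toy electrode-graph classifier; thresholding by |correlation| is an assumption.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One layer of H' = ReLU(D^{-1/2} A D^{-1/2} H W) over the electrode graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):                       # h: (nodes, in_dim), adj: (nodes, nodes)
        deg = adj.sum(-1).clamp(min=1e-6)
        norm_adj = adj / torch.sqrt(deg[:, None] * deg[None, :])
        return torch.relu(norm_adj @ self.lin(h))

def electrode_graph(trial, threshold=0.3):
    """trial: (n_electrodes, n_samples) EEG; weighted edges from |correlation|."""
    adj = torch.corrcoef(trial).abs()
    adj = adj * (adj > threshold)                    # drop weak connections
    adj.fill_diagonal_(1.0)                          # keep self-loops
    return adj

if __name__ == "__main__":
    trial = torch.randn(32, 256)                     # 32 electrodes, 256 samples per trial
    adj = electrode_graph(trial)
    conv1, conv2 = SimpleGraphConv(256, 64), SimpleGraphConv(64, 64)
    h = conv2(conv1(trial, adj), adj)                # node embeddings per electrode
    logits = nn.Linear(64, 2)(h.mean(dim=0))         # graph-level readout -> 2-class logits
    print(logits.shape)  # torch.Size([2])
```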
arXiv Detail & Related papers (2021-06-16T21:19:12Z)
- Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is an Unet-like pure Transformer for medical image segmentation.
Tokenized image patches are fed into the Transformer-based U-shaped Encoder-Decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z)
- Spatiotemporal Transformer for Video-based Person Re-identification [102.58619642363958]
We show that, despite the strong learning ability, the vanilla Transformer suffers from an increased risk of over-fitting.
We propose a novel pipeline where the model is pre-trained on a set of synthesized video data and then transferred to the downstream domains.
The derived algorithm achieves significant accuracy gain on three popular video-based person re-identification benchmarks.
arXiv Detail & Related papers (2021-03-30T16:19:27Z)
- Transformers Solve the Limited Receptive Field for Monocular Depth Prediction [82.90445525977904]
We propose TransDepth, an architecture which benefits from both convolutional neural networks and transformers.
This is the first paper to apply transformers to pixel-wise prediction problems involving continuous labels.
arXiv Detail & Related papers (2021-03-22T18:00:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.