ScatterFormer: Locally-Invariant Scattering Transformer for
Patient-Independent Multispectral Detection of Epileptiform Discharges
- URL: http://arxiv.org/abs/2304.14919v1
- Date: Wed, 26 Apr 2023 10:10:58 GMT
- Title: ScatterFormer: Locally-Invariant Scattering Transformer for
Patient-Independent Multispectral Detection of Epileptiform Discharges
- Authors: Ruizhe Zheng, Jun Li, Yi Wang, Tian Luo, Yuguo Yu
- Abstract summary: We propose an invariant scattering transform-based hierarchical Transformer that specifically pays attention to subtle features.
In particular, the disentangled frequency-aware attention (FAA) enables the Transformer to capture clinically informative high-frequency components.
Our proposed model achieves median AUCROC and accuracy of 98.14%, 96.39% in patients with Rolandic epilepsy.
- Score: 7.726017342725144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patient-independent detection of epileptic activities based on visual
spectral representation of continuous EEG (cEEG) has been widely used for
diagnosing epilepsy. However, precise detection remains a considerable
challenge due to subtle variabilities across subjects, channels and time
points. Thus, capturing fine-grained, discriminative features of EEG patterns,
which is associated with high-frequency textural information, is yet to be
resolved. In this work, we propose Scattering Transformer (ScatterFormer), an
invariant scattering transform-based hierarchical Transformer that specifically
pays attention to subtle features. In particular, the disentangled
frequency-aware attention (FAA) enables the Transformer to capture clinically
informative high-frequency components, offering a novel clinical explainability
based on visual encoding of multichannel EEG signals. Evaluations on two
distinct tasks of epileptiform detection demonstrate the effectiveness of our
method. Our proposed model achieves a median AUCROC of 98.14% and an accuracy of
96.39% in patients with Rolandic epilepsy. On a neonatal seizure detection
benchmark, it outperforms the state-of-the-art by 9% in terms of average
AUCROC.
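As a rough illustration of the architecture the abstract describes, the sketch below pairs a fixed band-pass front end (a stand-in for the invariant scattering transform) with an attention layer that processes low- and high-frequency feature groups separately (a stand-in for the disentangled frequency-aware attention, FAA). Module names, filter counts, and the half-and-half band split are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the idea in the abstract: a fixed, non-learned
# scattering-style front end followed by attention that is split into low- and
# high-frequency groups so that high-frequency texture is not averaged away.
# All sizes and module names are assumptions for illustration only.
import torch
import torch.nn as nn


class ScatteringFrontEnd(nn.Module):
    """Stand-in for the invariant scattering transform: fixed band-pass
    convolutions, modulus non-linearity, and temporal averaging."""

    def __init__(self, n_filters=16, kernel_size=65, pool=8):
        super().__init__()
        self.bandpass = nn.Conv1d(1, n_filters, kernel_size,
                                  padding=kernel_size // 2, bias=False)
        self.bandpass.weight.requires_grad = False  # fixed filters, as in scattering
        self.pool = nn.AvgPool1d(pool)              # local translation invariance

    def forward(self, x):                           # x: (batch, 1, time)
        return self.pool(self.bandpass(x).abs())    # (batch, n_filters, time // pool)


class FrequencyAwareAttention(nn.Module):
    """Assumed form of the disentangled frequency-aware attention: separate
    attention over the low- and high-frequency halves of the feature channels."""

    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.low = nn.MultiheadAttention(dim // 2, n_heads, batch_first=True)
        self.high = nn.MultiheadAttention(dim // 2, n_heads, batch_first=True)

    def forward(self, tokens):                      # tokens: (batch, seq, dim)
        lo, hi = tokens.chunk(2, dim=-1)
        lo, _ = self.low(lo, lo, lo)
        hi, _ = self.high(hi, hi, hi)
        return torch.cat([lo, hi], dim=-1)


class ScatterFormerSketch(nn.Module):
    def __init__(self, n_filters=16, n_classes=2):
        super().__init__()
        self.front = ScatteringFrontEnd(n_filters)
        self.faa = FrequencyAwareAttention(dim=n_filters)
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, eeg):                         # eeg: (batch, 1, time)
        feats = self.front(eeg)                     # (batch, n_filters, T')
        tokens = self.faa(feats.transpose(1, 2))    # (batch, T', n_filters)
        return self.head(tokens.mean(dim=1))        # clip-level logits


logits = ScatterFormerSketch()(torch.randn(4, 1, 2048))  # e.g. 4 single-channel EEG clips
```

Note that the front end's filters are frozen in this sketch, mirroring the fact that scattering coefficients come from fixed wavelets rather than learned kernels; only the attention and classification layers are trained.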
Related papers
- Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images [5.395912799904941]
Variations in CT scanner models and acquisition protocols introduce significant variability in the extracted radiomic features.
LTDiff++ is a multiscale latent diffusion model designed to enhance feature extraction in medical imaging.
arXiv Detail & Related papers (2024-10-05T02:13:57Z)
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- SincVAE: a New Approach to Improve Anomaly Detection on EEG Data Using SincNet and Variational Autoencoder [0.0]
This work proposes a semi-supervised approach for detecting epileptic seizures from EEG data, utilizing a novel Deep Learning-based method called SincVAE.
Results indicate that SincVAE improves seizure detection in EEG data and is capable of identifying early seizures during the preictal stage as well as monitoring patients throughout the postictal stage.
arXiv Detail & Related papers (2024-06-25T13:21:01Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Lightweight Convolution Transformer for Cross-patient Seizure Detection in Multi-channel EEG Signals [0.0]
This study proposes a novel deep learning architecture based on a lightweight convolution transformer (LCT).
The transformer is able to learn spatial and temporal correlated information simultaneously from the multi-channel electroencephalogram (EEG) signal to detect seizures at smaller segment lengths.
arXiv Detail & Related papers (2023-05-07T16:43:52Z)
- The role of noise in denoising models for anomaly detection in medical images [62.0532151156057]
Pathological brain lesions exhibit diverse appearance in brain images.
Unsupervised anomaly detection approaches have been proposed using only normal data for training.
We show that optimization of the spatial resolution and magnitude of the noise improves the performance of different model training regimes.
arXiv Detail & Related papers (2023-01-19T21:39:38Z)
- Unsupervised Multivariate Time-Series Transformers for Seizure Identification on EEG [9.338549413542948]
Epileptic seizures are commonly monitored through electroencephalogram (EEG) recordings.
We present an unsupervised transformer-based model for seizure identification on raw EEG.
We train an autoencoder involving a transformer encoder via an unsupervised loss function, incorporating a novel masking strategy.
arXiv Detail & Related papers (2023-01-03T15:57:13Z)
- EEG-Based Epileptic Seizure Prediction Using Temporal Multi-Channel Transformers [1.0970480513577103]
Epilepsy is one of the most common neurological diseases, characterized by transient and unprovoked events called epileptic seizures.
EEG is an auxiliary method used to perform both the diagnosis and the monitoring of epilepsy.
Given the unexpected nature of an epileptic seizure, its prediction would improve patient care, optimizing the quality of life and the treatment of epilepsy.
arXiv Detail & Related papers (2022-09-18T03:03:47Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that the performance of ViTs and CNNs is on par, with a small benefit for ViTs, while DeiTs outperform both if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Task-oriented Self-supervised Learning for Anomaly Detection in Electroencephalography [51.45515911920534]
A task-oriented self-supervised learning approach is proposed to train a more effective anomaly detector.
A specific two branch convolutional neural network with larger kernels is designed as the feature extractor.
The effectively designed and trained feature extractor is shown to extract better feature representations from EEGs.
arXiv Detail & Related papers (2022-07-04T13:15:08Z)
- Multiple Time Series Fusion Based on LSTM: An Application to CAP A Phase Classification Using EEG [56.155331323304]
Deep learning-based feature-level fusion of electroencephalogram channels is carried out in this work.
Channel selection, fusion, and classification procedures were optimized by two optimization algorithms.
arXiv Detail & Related papers (2021-12-18T14:17:49Z)