Improving self-supervised pretraining models for epileptic seizure
detection from EEG data
- URL: http://arxiv.org/abs/2207.06911v1
- Date: Tue, 28 Jun 2022 17:15:49 GMT
- Title: Improving self-supervised pretraining models for epileptic seizure
detection from EEG data
- Authors: Sudip Das, Pankaj Pandey, and Krishna Prasad Miyapuram
- Abstract summary: This paper presents various self-supervision strategies to enhance the performance of a time-series-based Diffusion Convolutional Recurrent Neural Network (DCRNN) model.
The learned weights in the self-supervision pretraining phase can be transferred to the supervised training phase to boost the model's prediction capability.
- Score: 0.23624125155742057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is abundant medical data on the internet, most of it unlabeled.
Traditional supervised learning algorithms are often limited by the amount of
labeled data, especially in the medical domain, where labeling is costly because
it requires substantial effort from specialized experts. The resulting labels are
also prone to human error and bias, since only a select few expert annotators
produce them. Self-supervision mitigates these issues by generating pseudo-labels
from the unlabeled data itself. This paper presents various self-supervision
strategies to enhance the performance of a time-series-based Diffusion
Convolutional Recurrent Neural Network (DCRNN) model.
The learned weights in the self-supervision pretraining phase can be
transferred to the supervised training phase to boost the model's prediction
capability. Our techniques are tested on an extension of the Diffusion
Convolutional Recurrent Neural Network (DCRNN) model, an RNN with graph
diffusion convolutions that models the spatiotemporal dependencies present in
EEG signals. When the learned weights from the pretraining stage are
transferred to a DCRNN model to determine whether an EEG time window contains a
characteristic seizure signal, our method yields an AUROC score $1.56\%$ higher
than the current state-of-the-art models on the TUH EEG seizure corpus.
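The pretrain-then-transfer workflow described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the encoder, the pretext task (next-window reconstruction), the module names, and the window shapes are all assumptions chosen to keep the example self-contained. In the paper the encoder is the graph-based DCRNN rather than the plain GRU used here, and several self-supervision strategies are compared.

```python
# Minimal sketch (PyTorch, assumed): self-supervised pretraining on unlabeled
# EEG windows, followed by transferring the encoder weights to a supervised
# seizure classifier. Tensor shapes: (batch, time, channels).
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Hypothetical stand-in for the DCRNN encoder."""
    def __init__(self, n_channels=19, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)

    def forward(self, x):                  # x: (batch, time, channels)
        _, h = self.rnn(x)                 # h: (1, batch, hidden)
        return h.squeeze(0)                # (batch, hidden)

class PretextModel(nn.Module):
    """Pretext task: reconstruct the next EEG window from the current one."""
    def __init__(self, encoder, n_channels=19, horizon=250, hidden=64):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden, horizon * n_channels)

    def forward(self, x):
        return self.head(self.encoder(x))

class SeizureClassifier(nn.Module):
    """Supervised model that reuses the pretrained encoder."""
    def __init__(self, encoder, hidden=64):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden, 1)   # binary seizure / non-seizure logit

    def forward(self, x):
        return self.head(self.encoder(x))

# 1) Self-supervised pretraining: pseudo-labels come from the data itself.
encoder = EEGEncoder()
pretext = PretextModel(encoder)
opt = torch.optim.Adam(pretext.parameters(), lr=1e-3)
x_unlabeled = torch.randn(8, 500, 19)      # dummy unlabeled EEG windows
next_window = torch.randn(8, 250 * 19)     # flattened "next window" target
loss = nn.functional.mse_loss(pretext(x_unlabeled), next_window)
opt.zero_grad(); loss.backward(); opt.step()

# 2) Transfer: the same encoder (with learned weights) initializes the
#    supervised seizure-detection model, which is then fine-tuned on labels.
clf = SeizureClassifier(encoder)
x_labeled = torch.randn(8, 500, 19)
y = torch.randint(0, 2, (8, 1)).float()
sup_loss = nn.functional.binary_cross_entropy_with_logits(clf(x_labeled), y)
```

The transfer step is the key pattern the abstract describes: the encoder weights learned on the pretext task are carried into the supervised model and fine-tuned, rather than training the classifier from random initialization.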
Related papers
- Synthesizing Multimodal Electronic Health Records via Predictive Diffusion Models [69.06149482021071]
We propose a novel EHR data generation model called EHRPD.
It is a diffusion-based model designed to predict the next visit based on the current one while also incorporating time interval estimation.
We conduct experiments on two public datasets and evaluate EHRPD from fidelity, privacy, and utility perspectives.
arXiv Detail & Related papers (2024-06-20T02:20:23Z) - Diffusion-Model-Assisted Supervised Learning of Generative Models for
Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Brain-Inspired Spiking Neural Network for Online Unsupervised Time
Series Prediction [13.521272923545409]
We present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN).
CLURSNN makes online predictions by reconstructing the underlying dynamical system using Random Delay Embedding.
We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system.
arXiv Detail & Related papers (2023-04-10T16:18:37Z) - On the Effectiveness of Generative Adversarial Network on Anomaly
Detection [1.6244541005112747]
GAN-based approaches rely on the rich contextual information captured by these models to identify the actual training distribution.
We suggest a new unsupervised model based on GANs: a combination of an autoencoder and a GAN.
A new scoring function is introduced to target anomalies: a linear combination of the discriminator's internal representation, the generator's visual representation, and the autoencoder's encoded representation defines the proposed anomaly score.
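Read loosely, that score can be sketched as a weighted sum of three distances, as in the illustrative snippet below. The weights and the specific norms are assumptions, and encoder, generator, and disc_features are hypothetical callables standing in for that paper's networks; this is a plausible reading of the description above, not the paper's exact formulation.

```python
import torch

def anomaly_score(x, encoder, generator, disc_features,
                  alpha=1.0, beta=1.0, gamma=1.0):
    """Illustrative anomaly score (hypothetical weights alpha/beta/gamma):
    combines a discriminator-feature distance, the generator's reconstruction
    error, and the distance between encodings of the input and its reconstruction."""
    z = encoder(x)                       # autoencoder's encoded representation
    x_rec = generator(z)                 # generator's reconstruction of x
    feat = torch.norm((disc_features(x) - disc_features(x_rec)).flatten(1), dim=1)
    rec = torch.norm((x - x_rec).flatten(1), dim=1)
    enc = torch.norm((z - encoder(x_rec)).flatten(1), dim=1)
    return alpha * feat + beta * rec + gamma * enc
```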
arXiv Detail & Related papers (2021-12-31T16:35:47Z) - Multiple Organ Failure Prediction with Classifier-Guided Generative
Adversarial Imputation Networks [4.040013871160853]
Multiple organ failure (MOF) is a severe syndrome with a high mortality rate among Intensive Care Unit (ICU) patients.
Applying machine learning models to electronic health records is a challenge due to the pervasiveness of missing values.
arXiv Detail & Related papers (2021-06-22T15:49:01Z) - Automated Seizure Detection and Seizure Type Classification From
Electroencephalography With a Graph Neural Network and Self-Supervised
Pre-Training [5.770965725405472]
We propose modeling EEGs as graphs and present a graph neural network for automated seizure detection and classification.
Our graph model with self-supervised pre-training significantly outperforms previous state-of-the-art CNN and Long Short-Term Memory (LSTM) models.
arXiv Detail & Related papers (2021-04-16T20:32:10Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - TELESTO: A Graph Neural Network Model for Anomaly Classification in
Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at recognizing recurring anomaly types to enable automated remediation.
We propose a method that is invariant to dimensionality changes of given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z) - Uncovering the structure of clinical EEG signals with self-supervised
learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)