Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily
Long Videos of Seizures
- URL: http://arxiv.org/abs/2106.12014v1
- Date: Tue, 22 Jun 2021 18:40:31 GMT
- Title: Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily
Long Videos of Seizures
- Authors: Fernando Pérez-García, Catherine Scott, Rachel Sparks, Beate Diehl
and Sébastien Ourselin
- Abstract summary: Detailed analysis of seizure semiology is critical for management of epilepsy patients.
We present GESTURES, a novel architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs)
We show that an STCNN trained on a HAR dataset can be used in combination with an RNN to accurately represent arbitrarily long videos of seizures.
- Score: 58.720142291102135
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detailed analysis of seizure semiology, the symptoms and signs which occur
during a seizure, is critical for management of epilepsy patients. Inter-rater
reliability using qualitative visual analysis is often poor for semiological
features. Therefore, automatic and quantitative analysis of video-recorded
seizures is needed for objective assessment.
We present GESTURES, a novel architecture combining convolutional neural
networks (CNNs) and recurrent neural networks (RNNs) to learn deep
representations of arbitrarily long videos of epileptic seizures.
We use a spatiotemporal CNN (STCNN) pre-trained on large human action
recognition (HAR) datasets to extract features from short snippets (approx. 0.5
s) sampled from seizure videos. We then train an RNN to learn seizure-level
representations from the sequence of features.
We curated a dataset of seizure videos from 68 patients and evaluated
GESTURES on its ability to classify seizures into focal onset seizures (FOSs)
(N = 106) vs. focal to bilateral tonic-clonic seizures (TCSs) (N = 77),
obtaining an accuracy of 98.9% using bidirectional long short-term memory
(BLSTM) units.
We demonstrate that an STCNN trained on a HAR dataset can be used in
combination with an RNN to accurately represent arbitrarily long videos of
seizures. GESTURES can provide accurate seizure classification by modeling
sequences of semiologies.
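The abstract describes a two-stage pipeline: an STCNN pre-trained on human action recognition extracts features from short (approx. 0.5 s) snippets, and a recurrent network (a bidirectional LSTM) aggregates the resulting feature sequence into a seizure-level prediction. A minimal PyTorch sketch of that structure is given below; the specific backbone (torchvision's r2plus1d_18 pre-trained on Kinetics), the 512-d feature size, snippet length, and layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' code) of an HAR-pretrained STCNN feeding a
# bidirectional LSTM, in the spirit of GESTURES. Backbone choice, feature size
# (512) and hidden size are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

class SeizureClassifier(nn.Module):
    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        # STCNN pre-trained on Kinetics (HAR); exact loading API depends on
        # the torchvision version.
        backbone = r2plus1d_18(weights="KINETICS400_V1")
        backbone.fc = nn.Identity()               # expose the 512-d snippet features
        self.stcnn = backbone
        self.rnn = nn.LSTM(512, hidden_size, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, snippets):
        # snippets: (batch, num_snippets, 3, frames, height, width);
        # each snippet covers roughly 0.5 s of video.
        b, n = snippets.shape[:2]
        feats = self.stcnn(snippets.flatten(0, 1))   # (b * n, 512)
        feats = feats.view(b, n, -1)                 # sequence of snippet features
        _, (h, _) = self.rnn(feats)                  # final hidden state per direction
        seq_repr = torch.cat([h[0], h[1]], dim=1)    # seizure-level representation
        return self.head(seq_repr)                   # logits for FOS vs. TCS

model = SeizureClassifier()
video = torch.randn(1, 10, 3, 16, 112, 112)          # 10 snippets of 16 frames each
logits = model(video)                                 # shape: (1, 2)
```

Because the RNN reduces the snippet-feature sequence to its final hidden states, the same model accepts any number of snippets, which is how arbitrarily long recordings can be represented with a fixed-size architecture.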
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are a deep learning model that accounts for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - A Meta-GNN approach to personalized seizure detection and classification [53.906130332172324]
We propose a personalized seizure detection and classification framework that quickly adapts to a specific patient from limited seizure samples.
We train a Meta-GNN based classifier that learns a global model from a set of training patients.
We show that our method outperforms the baselines, reaching 82.7% accuracy and an 82.08% F1 score after only 20 iterations on new, unseen patients.
arXiv Detail & Related papers (2022-11-01T14:12:58Z) - N-Omniglot: a Large-scale Neuromorphic Dataset for Spatio-Temporal
Sparse Few-shot Learning [10.812738608234321]
We provide the first neuromorphic dataset: N-Omniglot, using the Dynamic Vision Sensor (DVS)
It contains 1623 categories of handwritten characters, with only 20 samples per class.
The dataset provides a powerful challenge and a suitable benchmark for developing SNN algorithms in the few-shot learning domain.
arXiv Detail & Related papers (2021-12-25T12:41:34Z) - SOUL: An Energy-Efficient Unsupervised Online Learning Seizure Detection
Classifier [68.8204255655161]
Implantable devices that record neural activity and detect seizures have been adopted to issue warnings or trigger neurostimulation to suppress seizures.
For an implantable seizure detection system, a low power, at-the-edge, online learning algorithm can be employed to dynamically adapt to neural signal drifts.
SOUL was fabricated in TSMC's 28 nm process, occupies 0.1 mm², and achieves 1.5 nJ/classification energy efficiency, at least 24x more efficient than the state of the art.
arXiv Detail & Related papers (2021-10-01T23:01:20Z) - An End-to-End Deep Learning Approach for Epileptic Seizure Prediction [4.094649684498489]
We propose an end-to-end deep learning solution using a convolutional neural network (CNN)
Overall sensitivity, false prediction rate, and area under the receiver operating characteristic curve reach 93.5%, 0.063/h, and 0.981 on one dataset and 98.8%, 0.074/h, and 0.988 on the other.
arXiv Detail & Related papers (2021-08-17T05:49:43Z) - MARL: Multimodal Attentional Representation Learning for Disease
Prediction [0.0]
Existing learning models often utilise CT-scan images to predict lung diseases.
These models are hampered by high uncertainties that affect lung segmentation and visual feature learning.
We introduce MARL, a novel Multimodal Attentional Representation Learning model architecture.
arXiv Detail & Related papers (2021-05-01T17:47:40Z) - Spatial-Temporal Correlation and Topology Learning for Person
Re-Identification in Videos [78.45050529204701]
We propose a novel framework (CTL) to pursue discriminative and robust representations by modeling cross-scale spatial-temporal correlation.
CTL utilizes a CNN backbone and a key-points estimator to extract semantic local features from the human body.
It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and the physical connections of the human body.
arXiv Detail & Related papers (2021-04-15T14:32:12Z) - Spiking Neural Networks -- Part II: Detecting Spatio-Temporal Patterns [38.518936229794214]
Spiking Neural Networks (SNNs) have the unique ability to detect information encoded in spatio-temporal signals.
We review models and training algorithms for the dominant approach that treats SNNs as Recurrent Neural Networks (RNNs).
We describe an alternative approach that relies on probabilistic models for spiking neurons, allowing the derivation of local learning rules via gradient estimates.
arXiv Detail & Related papers (2020-10-27T11:47:42Z) - Incorporating Learnable Membrane Time Constant to Enhance Learning of
Spiking Neural Networks [36.16846259899793]
Spiking Neural Networks (SNNs) have attracted enormous research interest due to temporal information processing capability, low power consumption, and high biological plausibility.
Most existing learning methods learn weights only, and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron.
In this paper, we take inspiration from the observation that membrane-related parameters are different across brain regions, and propose a training algorithm that is capable of learning not only the synaptic weights but also the membrane time constants of SNNs.
arXiv Detail & Related papers (2020-07-11T14:35:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.