MSED: a multi-modal sleep event detection model for clinical sleep
analysis
- URL: http://arxiv.org/abs/2101.02530v1
- Date: Thu, 7 Jan 2021 13:08:44 GMT
- Title: MSED: a multi-modal sleep event detection model for clinical sleep
analysis
- Authors: Alexander Neergaard Olesen, Poul Jennum, Emmanuel Mignot and Helge B.
D. Sorensen
- Abstract summary: We designed a single deep neural network architecture to jointly detect sleep events in a polysomnogram.
The performance of the model was quantified by F1, precision, and recall scores, and by correlating index values to clinical values.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Study objective: Clinical sleep analysis requires manual analysis of
sleep patterns for correct diagnosis of sleep disorders. Several studies show
significant variability in scoring discrete sleep events. We wished to
investigate whether an automatic method could be used for the detection of
arousals (Ar), leg movements (LM), and sleep disordered breathing (SDB) events,
and whether joint detection of these events performed better than having three
separate models.
Methods: We designed a single deep neural network architecture to jointly
detect sleep events in a polysomnogram. We trained the model on 1653 recordings
of individuals, and tested the optimized model on 1000 separate recordings. The
performance of the model was quantified by F1, precision, and recall scores,
and by correlating index values to clinical values using Pearson's correlation
coefficient.
Results: F1 scores for the optimized model were 0.70, 0.63, and 0.62 for Ar,
LM, and SDB, respectively. The performance was higher when detecting events
jointly compared to corresponding single-event models. Index values computed
from detected events correlated well with manual annotations ($r^2$ = 0.73,
$r^2$ = 0.77, $r^2$ = 0.78, respectively).
Conclusion: Detecting arousals, leg movements, and sleep disordered breathing
events jointly is possible, and the computed index values correlate well with
human annotations.
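The evaluation described above can be illustrated with a minimal sketch: event-level precision, recall, and F1 from overlap-based matching of detected and annotated events, plus the squared Pearson correlation of per-recording index values. The greedy matching rule and the 0.5 overlap threshold are assumptions for illustration; the paper's exact matching criterion may differ.

```python
import numpy as np

def match_events(pred, true, min_overlap=0.5):
    """Greedily match predicted to annotated (start, end) events using an
    intersection-over-union overlap criterion (an assumed rule)."""
    matched_true = set()
    tp = 0
    for ps, pe in pred:
        for i, (ts, te) in enumerate(true):
            if i in matched_true:
                continue
            inter = max(0.0, min(pe, te) - max(ps, ts))
            union = max(pe, te) - min(ps, ts)
            if union > 0 and inter / union >= min_overlap:
                matched_true.add(i)
                tp += 1
                break
    fp = len(pred) - tp
    fn = len(true) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Correlating per-recording index values (events per hour of sleep) computed
# from detected events with manually derived values (illustrative numbers):
auto_index = np.array([12.1, 30.5, 7.8, 22.0])
manual_index = np.array([11.0, 28.9, 9.1, 20.4])
r = np.corrcoef(auto_index, manual_index)[0, 1]
r_squared = r ** 2
```

A usage note: with `pred = [(0, 10), (20, 30)]` and `true = [(1, 11), (50, 60)]`, one prediction matches and one is spurious, giving precision, recall, and F1 of 0.5 each.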
Related papers
- SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals [17.416001617612658]
Sleep is a complex physiological process evaluated through various modalities recording electrical brain, cardiac, and respiratory activities.
We developed SleepFM, the first multi-modal foundation model for sleep analysis.
We show that a novel leave-one-out approach for contrastive learning significantly improves downstream task performance.
arXiv Detail & Related papers (2024-05-28T02:43:53Z)
- A Federated Learning Framework for Stenosis Detection [70.27581181445329]
This study explores the use of Federated Learning (FL) for stenosis detection in coronary angiography (CA) images.
Two heterogeneous datasets from two institutions were considered: dataset 1 includes 1219 images from 200 patients, acquired at the Ospedale Riuniti of Ancona (Italy);
dataset 2 includes 7492 sequential images from 90 patients from a previous study available in the literature.
arXiv Detail & Related papers (2023-10-30T11:13:40Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, that handles missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit the multi-modality features from related sensors and also learn from history time-series dynamics to reconstruct the data under extreme missingness.
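The nearest-neighbor idea behind this kind of imputation can be sketched in plain NumPy: a missing sensor reading at one time point is filled from the k rows that are closest on the features both rows observe. This is a deliberately simplified stand-in, not DynImp itself, which combines this with learned temporal dynamics.

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in a (time, feature) array using the k nearest rows,
    with distance measured on commonly observed features.
    A simplification of the neighbor-based idea; DynImp is more elaborate."""
    X = X.astype(float).copy()
    filled = X.copy()
    for t in range(X.shape[0]):
        miss = np.isnan(X[t])
        if not miss.any():
            continue
        obs = ~miss
        # Distance from row t to every other row on row t's observed features.
        dists = []
        for s in range(X.shape[0]):
            if s == t:
                continue
            common = obs & ~np.isnan(X[s])
            if not common.any() or np.isnan(X[s][miss]).all():
                continue
            d = np.sqrt(np.mean((X[t, common] - X[s, common]) ** 2))
            dists.append((d, s))
        dists.sort()
        neighbors = [s for _, s in dists[:k]]
        # Impute each missing feature with the neighbors' mean value.
        for j in np.where(miss)[0]:
            vals = [X[s, j] for s in neighbors if not np.isnan(X[s, j])]
            if vals:
                filled[t, j] = np.mean(vals)
    return filled
```

For example, a row `[1.1, NaN]` surrounded by similar rows `[1.0, 2.0]` and `[0.9, 2.2]` gets its missing value filled with their mean, 2.1, rather than a value borrowed from a dissimilar row.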
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel EEG Signal [63.18666008322476]
Sleep problems are among the major health concerns worldwide.
The basic tool used by specialists is the polysomnogram, a collection of different signals recorded during sleep.
Specialists have to score the different signals according to one of the standard guidelines.
arXiv Detail & Related papers (2021-03-30T09:59:56Z)
- Sleep Apnea and Respiratory Anomaly Detection from a Wearable Band and Oxygen Saturation [1.2291501047353484]
There is a need in general medicine and critical care for a more convenient method to automatically detect sleep apnea from a simple, easy-to-wear device.
The objective is to automatically detect abnormal respiration and estimate the Apnea-Hypopnea-Index (AHI) with a wearable respiratory device.
Four models were trained: one using the respiratory features only, one using a feature from the SpO2 (%) signal only, and two additional models that use both the respiratory features and the SpO2 (%) feature.
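The Apnea-Hypopnea-Index mentioned above is the count of respiratory events per hour of sleep. A minimal sketch, using the commonly cited clinical severity cutoffs of 5, 15, and 30 events per hour (these cutoffs are standard practice, not taken from this paper):

```python
def apnea_hypopnea_index(n_apneas, n_hypopneas, total_sleep_hours):
    """AHI: apneas plus hypopneas per hour of sleep."""
    if total_sleep_hours <= 0:
        raise ValueError("total sleep time must be positive")
    return (n_apneas + n_hypopneas) / total_sleep_hours

def ahi_severity(ahi):
    """Map an AHI value to the commonly used clinical categories."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```

For instance, 20 apneas and 22 hypopneas over 7 hours of sleep give an AHI of 6.0, which falls in the "mild" category under these cutoffs.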
arXiv Detail & Related papers (2021-02-24T02:04:57Z)
- Temporal convolutional networks and transformers for classifying the sleep stage in awake or asleep using pulse oximetry signals [0.0]
We develop a network architecture with the aim of classifying the sleep stage as awake or asleep using only HR signals from a pulse oximeter.
Transformers are able to model the sequence, learning the transition rules between sleep stages.
The overall accuracy, specificity, sensitivity, and Cohen's kappa coefficient were 90.0%, 94.9%, 78.1%, and 0.73, respectively.
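The four metrics reported above all derive from binary confusion-matrix counts; a small sketch of the standard definitions (illustrative, not this paper's evaluation code):

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, specificity, sensitivity, and Cohen's kappa from
    binary confusion-matrix counts."""
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    specificity = tn / (tn + fp)   # true negative rate
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_o = accuracy
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, specificity, sensitivity, kappa
```

For example, counts of 40 true positives, 45 true negatives, 5 false positives, and 10 false negatives give accuracy 0.85, specificity 0.90, sensitivity 0.80, and kappa 0.70.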
arXiv Detail & Related papers (2021-01-29T22:58:33Z)
- RED: Deep Recurrent Neural Networks for Sleep EEG Event Detection [0.0]
We propose a deep learning approach for sleep EEG event detection called the Recurrent Event Detector (RED).
RED uses one of two input representations: a) the time-domain EEG signal, or b) a complex spectrogram of the signal obtained with the Continuous Wavelet Transform (CWT).
When evaluated on the MASS dataset, our detectors outperform the state of the art in both sleep spindle and K-complex detection with a mean F1-score of at least 80.9% and 82.6%, respectively.
arXiv Detail & Related papers (2020-05-15T21:48:26Z)
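The complex CWT spectrogram used as an input representation above can be approximated in plain NumPy by convolving the signal with complex Morlet wavelets, one scale per target frequency. This is a simplified stand-in for a full CWT (the wavelet family, normalization, and scale spacing here are assumptions, not RED's actual preprocessing):

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, w=6.0):
    """Complex time-frequency representation via convolution with
    unit-energy complex Morlet wavelets, one per target frequency."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        # Wavelet duration scales inversely with frequency.
        sigma_t = w / (2 * np.pi * f)
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
        out[i] = np.convolve(x, wavelet, mode="same")
    return out
```

Applied to a pure 10 Hz sinusoid, the magnitude of the row centered at 10 Hz dominates the rows at 5 and 20 Hz, which is the frequency-localization property a detector would exploit.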
This list is automatically generated from the titles and abstracts of the papers in this site.