Automatic sleep stage classification with deep residual networks in a
mixed-cohort setting
- URL: http://arxiv.org/abs/2008.09416v1
- Date: Fri, 21 Aug 2020 10:48:35 GMT
- Title: Automatic sleep stage classification with deep residual networks in a
mixed-cohort setting
- Authors: Alexander Neergaard Olesen, Poul Jennum, Emmanuel Mignot, Helge B D
Sorensen
- Abstract summary: We developed a novel deep neural network model and assessed its generalizability across several large-scale cohorts.
Overall classification accuracy improved with increasing fractions of training data.
- Score: 63.52264764099532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Study Objectives: Sleep stage scoring is performed manually by sleep experts
and is prone to subjective interpretation of scoring rules with low intra- and
interscorer reliability. Many automatic systems rely on a few small-scale
databases for developing models, and generalizability to new datasets is thus
unknown. We investigated a novel deep neural network to assess
generalizability across several large-scale cohorts.
Methods: A deep neural network model was developed using 15,684
polysomnography studies from five different cohorts. We tested four scenarios:
1) the impact of varying time scales in the model; 2) the performance of a
single cohort on other cohorts of smaller, greater, or equal size relative to
the performance of other cohorts on a single cohort; 3) varying the fraction
of mixed-cohort training data compared to using single-origin data; and 4)
comparing models trained on combinations of data from 2, 3, and 4 cohorts.
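The abstract does not spell out the network beyond "deep residual"; purely as
an illustration, a minimal PyTorch sketch of a 1-D residual classifier over
30-second polysomnography epochs could look as follows (channel count,
sampling rate, kernel sizes, and the five-stage output are assumptions, not
the authors' configuration):

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """1-D residual block for raw polysomnography channels."""
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)  # skip connection

class SleepStager(nn.Module):
    """Toy residual classifier: one PSG epoch -> 5 stages (W, N1, N2, N3, REM)."""
    def __init__(self, in_channels=4, width=32, n_blocks=3, n_stages=5):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, width, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[ResBlock1d(width) for _ in range(n_blocks)])
        self.head = nn.Linear(width, n_stages)

    def forward(self, x):          # x: (batch, channels, samples)
        h = self.blocks(self.stem(x))
        h = h.mean(dim=-1)         # global average pooling over time
        return self.head(h)

# One 30-s epoch at an assumed 128 Hz with 4 signals (e.g., EEG, EOG, EMG).
logits = SleepStager()(torch.randn(8, 4, 30 * 128))
print(logits.shape)  # torch.Size([8, 5])
```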
Results: Overall classification accuracy improved with increasing fractions
of training data (0.25%: 0.782 ± 0.097, 95% CI [0.777-0.787]; 100%: 0.869 ±
0.064, 95% CI [0.864-0.872]), and with increasing number of data sources (2:
0.788 ± 0.102, 95% CI [0.787-0.790]; 3: 0.808 ± 0.092, 95% CI [0.807-0.810];
4: 0.821 ± 0.085, 95% CI [0.819-0.823]). Different cohorts show varying levels
of generalization to other cohorts.
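As a point of reference, a 95% confidence interval of the kind quoted above is
typically computed as mean ± 1.96 × SEM over per-record accuracies, which is
why the CI is far narrower than the quoted ± standard deviation; a sketch with
fabricated numbers (not the study's data):

```python
import math
import random

random.seed(0)
# Hypothetical per-record accuracies; the real study aggregates thousands of PSGs.
accs = [random.gauss(0.87, 0.06) for _ in range(1000)]

n = len(accs)
mean = sum(accs) / n
sd = math.sqrt(sum((a - mean) ** 2 for a in accs) / (n - 1))  # the "±" spread
sem = sd / math.sqrt(n)                                        # standard error
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem                  # normal-approx. CI

print(f"{mean:.3f} ± {sd:.3f}, 95% CI [{lo:.3f}-{hi:.3f}]")
```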
Conclusions: Automatic sleep stage scoring systems based on deep learning
algorithms should use as much data as possible, from as many sources as
possible, to ensure proper generalization. Public datasets for benchmarking
should be made available for future research.
Related papers
- Harnessing Increased Client Participation with Cohort-Parallel Federated Learning [2.9593087583214173]
Federated Learning (FL) is a machine learning approach where nodes collaboratively train a global model.
We introduce Cohort-Parallel Federated Learning (CPFL), a novel learning approach in which nodes are divided into cohorts and each cohort independently trains its own model.
CPFL with four cohorts, a non-IID data distribution, and CIFAR-10 yields a 1.9× reduction in train time and a 1.3× reduction in resource usage.
arXiv Detail & Related papers (2024-05-24T15:34:09Z)
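As a loose sketch of the cohort-parallel idea only (not the authors' code: the
scalar "model", quadratic loss, and cohort assignment below are invented for
illustration), clients can be partitioned into cohorts that each run an
independent FedAvg-style loop:

```python
from statistics import mean

def local_update(weights, client_data, lr=0.1):
    """One client's local step on a toy quadratic loss (illustration only)."""
    grad = weights - mean(client_data)
    return weights - lr * grad

def fedavg_round(weights, cohort):
    """One round inside a single cohort: train locally per client, then average."""
    return mean(local_update(weights, data) for data in cohort)

# Partition 8 clients' datasets into 2 cohorts that train independently.
clients = [[i + 0.1 * j for j in range(5)] for i in range(8)]
cohorts = [clients[:4], clients[4:]]

models = [0.0 for _ in cohorts]  # each cohort trains its own model
for _ in range(50):
    models = [fedavg_round(w, c) for w, c in zip(models, cohorts)]

print([round(m, 2) for m in models])  # one converged model per cohort
```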
- E(2) Equivariant Neural Networks for Robust Galaxy Morphology Classification [0.0]
We train, validate, and test GCNNs equivariant to discrete subgroups of E(2) on the Galaxy10 DECaLS dataset.
An architecture equivariant to the group D16 achieves a 95.52 ± 0.18% test-set accuracy.
All GCNNs are less susceptible to one-pixel perturbations than an identically constructed CNN.
arXiv Detail & Related papers (2023-11-02T18:00:02Z)
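A network equivariant to the dihedral group D16, as in the entry above, can be
assembled with a group-equivariant CNN library; the sketch below uses e2cnn
(whether the authors used this particular library, and the layer sizes, are
assumptions):

```python
import torch
from e2cnn import gspaces, nn as enn

# Dihedral group D16: 16 rotations combined with reflections, a discrete
# subgroup of E(2).
r2_act = gspaces.FlipRot2dOnR2(N=16)

in_type = enn.FieldType(r2_act, 3 * [r2_act.trivial_repr])   # RGB galaxy image
hid_type = enn.FieldType(r2_act, 8 * [r2_act.regular_repr])  # equivariant features

model = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=5, padding=2),
    enn.ReLU(hid_type),
    enn.GroupPooling(hid_type),  # pool over the group -> rotation/flip invariance
)

x = enn.GeometricTensor(torch.randn(1, 3, 64, 64), in_type)
y = model(x)
print(y.tensor.shape)  # invariant feature maps for a standard classifier head
```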
- Offline Reinforcement Learning at Multiple Frequencies [62.08749079914275]
We study how well offline reinforcement learning algorithms can accommodate data with a mixture of frequencies during training.
We present a simple yet effective solution that enforces consistency in the rate of Q-value updates to stabilize learning.
arXiv Detail & Related papers (2022-07-26T17:54:49Z)
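A standard way to keep value backups comparable across control frequencies is
to discount by elapsed time rather than per step; the toy target below
illustrates that general idea, though the paper's exact consistency mechanism
may differ:

```python
gamma = 0.99  # discount per second of real time, not per environment step

def td_target(reward, next_value, dt):
    """Time-aware TD target: discounting by gamma**dt keeps the scale of
    Q-value updates consistent across sampling frequencies (illustrative)."""
    return reward + (gamma ** dt) * next_value

# The same one-second interval seen at 1 Hz (one step) vs. 10 Hz (dt = 0.1):
print(td_target(reward=1.0, next_value=5.0, dt=1.0))  # low-frequency transition
print(td_target(reward=0.1, next_value=5.0, dt=0.1))  # high-frequency transition
```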
- Deconstructing Distributions: A Pointwise Framework of Learning [15.517383696434162]
We study a point's profile: the relationship between models' average performance on the test distribution and their pointwise performance on this individual point.
We find that profiles can yield new insights into the structure of both models and data, in- and out-of-distribution.
arXiv Detail & Related papers (2022-02-20T23:25:28Z)
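A point's profile can be approximated empirically by evaluating many models of
varying overall quality on both the full test set and the single point; in the
fabricated sketch below, "easy" points are solved even by weak models while
"hard" points are solved mostly by strong ones:

```python
import random

random.seed(1)

def point_profile(point_difficulty, n_models=200):
    """Pairs of (average test accuracy, correct on this point?) for a
    fabricated collection of models of varying quality."""
    profile = []
    for _ in range(n_models):
        global_acc = random.uniform(0.5, 0.95)
        # Chance of solving the point grows with model quality and shrinks
        # with point difficulty (a made-up relationship for illustration).
        p_correct = max(0.0, min(1.0, global_acc - point_difficulty + 0.5))
        profile.append((global_acc, random.random() < p_correct))
    return profile

for name, difficulty in [("easy", 0.2), ("hard", 0.8)]:
    prof = point_profile(difficulty)
    acc_on_point = sum(correct for _, correct in prof) / len(prof)
    print(name, round(acc_on_point, 2))
```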
- Test-time Batch Statistics Calibration for Covariate Shift [66.7044675981449]
We propose to adapt the deep models to the novel environment during inference.
We present a general formulation, α-BN, to calibrate the batch statistics.
We also present a novel loss function to form a unified test-time adaptation framework, Core.
arXiv Detail & Related papers (2021-10-06T08:45:03Z)
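The core of such batch-statistics calibration is to blend the source-domain
running statistics with those of the current test batch; a minimal NumPy
sketch (the variable names, the convention that α weights the test batch, and
the default α = 0.9 are my assumptions):

```python
import numpy as np

def alpha_bn(x, running_mean, running_var, alpha=0.9, eps=1e-5):
    """Test-time calibrated batch norm: mix source-domain running statistics
    with the statistics of the current (covariate-shifted) test batch.
    x: (batch, features); alpha weights the test-batch statistics."""
    mean = alpha * x.mean(axis=0) + (1 - alpha) * running_mean
    var = alpha * x.var(axis=0) + (1 - alpha) * running_var
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x_test = rng.normal(loc=2.0, scale=3.0, size=(64, 8))  # shifted test batch
out = alpha_bn(x_test, running_mean=np.zeros(8), running_var=np.ones(8))
print(out.mean(), out.std())  # approximately re-centered and re-scaled
```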
- Ensemble of Convolution Neural Networks on Heterogeneous Signals for Sleep Stage Scoring [63.30661835412352]
This paper explores and compares the benefit of using additional signals besides the electroencephalogram (EEG).
The best overall model, an ensemble of Depth-wise Separable Convolutional Neural Networks, has achieved an accuracy of 86.06%.
arXiv Detail & Related papers (2021-07-23T06:37:38Z)
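A depth-wise separable convolution, the building block named above, factorizes
a full convolution into a per-channel spatial filter followed by a 1×1
pointwise mix; a PyTorch sketch for 1-D biosignals (channel counts and
sampling rate are assumptions):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depth-wise (per-channel) convolution followed by a 1x1 pointwise mix:
    far fewer parameters than a full Conv1d with the same receptive field."""
    def __init__(self, in_ch, out_ch, kernel_size=9):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A 30-s epoch of 4 signals at an assumed 100 Hz: EEG plus auxiliary channels.
x = torch.randn(2, 4, 3000)
print(DepthwiseSeparableConv1d(4, 16)(x).shape)  # torch.Size([2, 16, 3000])
```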
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
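Roughly, the CCVR recipe is: estimate per-class Gaussian statistics over the
features of the frozen, federally trained extractor, sample "virtual"
representations from them, and recalibrate only the classifier. The sketch
below fabricates the features and substitutes a nearest-mean rule for the
retrained classifier head, so it shows the data flow rather than the paper's
implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated features from a frozen feature extractor, two classes.
feats = {0: rng.normal(-2.0, 1.0, size=(500, 16)),
         1: rng.normal(+2.0, 1.0, size=(500, 16))}

# 1) Fit one Gaussian per class (in CCVR these moments are aggregated from
#    clients without sharing raw features).
stats = {c: (f.mean(axis=0), np.cov(f, rowvar=False)) for c, f in feats.items()}

# 2) Sample class-balanced virtual representations.
X = np.vstack([rng.multivariate_normal(mu, cov, size=200)
               for mu, cov in stats.values()])
y = np.repeat(list(stats.keys()), 200)

# 3) Recalibrate the classifier on virtual data (stand-in: nearest class mean).
class_means = {c: X[y == c].mean(axis=0) for c in stats}
query = feats[1][0]  # a feature vector whose true class is 1
pred = min(class_means, key=lambda c: np.linalg.norm(query - class_means[c]))
print(pred)  # expected: 1
```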
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Learning Realistic Patterns from Unrealistic Stimuli: Generalization and
Data Anonymization [0.5091527753265949]
This work investigates a simple yet unconventional approach to anonymized data synthesis, enabling third parties to benefit from private data.
We use sleep monitoring data from both an open and a large closed clinical study and evaluate whether (1) end-users can create and successfully use customized classification models for sleep apnea detection, and (2) the identity of participants in the study is protected.
arXiv Detail & Related papers (2020-09-21T16:31:21Z)
- Question Type Classification Methods Comparison [0.0]
The paper presents a comparative study of state-of-the-art approaches to the question classification task: Logistic Regression, Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and Quasi-Recurrent Neural Networks (QRNN).
All models use pre-trained GloVe word embeddings and are trained on human-labeled data.
The best accuracy is achieved by a CNN model with five convolutional layers of various kernel sizes stacked in parallel, followed by one fully connected layer.
arXiv Detail & Related papers (2020-01-03T00:16:46Z)
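The winning architecture described above (parallel convolutional branches with
different kernel sizes over word embeddings, followed by a single fully
connected layer) matches the classic text-CNN pattern; a PyTorch sketch in
which the vocabulary size, embedding width, filter counts, and six-class
output are assumptions:

```python
import torch
import torch.nn as nn

class ParallelKernelTextCNN(nn.Module):
    """Five 1-D convolutions with different kernel sizes applied in parallel
    over word embeddings, max-pooled, concatenated, then one FC layer."""
    def __init__(self, vocab=10000, emb=100, n_filters=64,
                 kernel_sizes=(1, 2, 3, 4, 5), n_classes=6):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, n_filters, k, padding=k // 2) for k in kernel_sizes)
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, tokens):                # tokens: (batch, seq_len)
        e = self.emb(tokens).transpose(1, 2)  # -> (batch, emb, seq_len)
        pooled = [conv(e).amax(dim=-1) for conv in self.convs]  # global max-pool
        return self.fc(torch.cat(pooled, dim=-1))

logits = ParallelKernelTextCNN()(torch.randint(0, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 6])
```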
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.