Personalized Automatic Sleep Staging with Single-Night Data: a Pilot
Study with KL-Divergence Regularization
- URL: http://arxiv.org/abs/2004.11349v2
- Date: Mon, 11 May 2020 23:16:17 GMT
- Title: Personalized Automatic Sleep Staging with Single-Night Data: a Pilot
Study with KL-Divergence Regularization
- Authors: Huy Phan, Kaare Mikkelsen, Oliver Y. Chén, Philipp Koch, Alfred
Mertins, Preben Kidmose, Maarten De Vos
- Abstract summary: We propose a Kullback-Leibler (KL) divergence regularized transfer learning approach to address the scarcity of single-night personalization data.
We employ the pretrained SeqSleepNet as a starting point and finetune it with the single-night personalization data to derive the personalized model.
Experimental results on the Sleep-EDF Expanded database with 75 subjects show that sleep staging personalization with single-night data is possible with the help of the proposed KL-divergence regularization.
- Score: 18.754100926147903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Brain waves vary between people. An obvious way to improve automatic sleep
staging for longitudinal sleep monitoring is personalization of algorithms
based on individual characteristics extracted from the first night of data. As
a single night is a very small amount of data to train a sleep staging model,
we propose a Kullback-Leibler (KL) divergence regularized transfer learning
approach to address this problem. We employ the pretrained SeqSleepNet (i.e.
the subject independent model) as a starting point and finetune it with the
single-night personalization data to derive the personalized model. This is
done by adding the KL divergence between the output of the subject independent
model and the output of the personalized model to the loss function during
finetuning. In effect, KL-divergence regularization prevents the personalized
model from overfitting to the single-night data and straying too far away from
the subject independent model. Experimental results on the Sleep-EDF Expanded
database with 75 subjects show that sleep staging personalization with
single-night data is possible with the help of the proposed KL-divergence
regularization. On average, we achieve a personalized sleep staging accuracy of
79.6%, a Cohen's kappa of 0.706, a macro F1-score of 73.0%, a sensitivity of
71.8%, and a specificity of 94.2%. We find both that the approach is robust
against overfitting and that it improves the accuracy by 4.5 percentage points
compared to non-personalization and 2.2 percentage points compared to
personalization without regularization.
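As a rough illustration of the finetuning objective described in the abstract, here is a minimal PyTorch sketch of one KL-regularized training step. The model, the single-night data pipeline, and the regularization weight `beta` are hypothetical stand-ins; the paper's actual SeqSleepNet architecture and hyperparameters are not reproduced here.

```python
import torch
import torch.nn.functional as F


def kl_regularized_finetune_step(personal_model, frozen_model, optimizer,
                                 x, y, beta=1.0):
    """One finetuning step on a batch (x, y) of single-night data.

    loss = cross_entropy(personalized output, labels)
         + beta * KL(subject-independent output || personalized output)
    """
    personal_model.train()
    logits = personal_model(x)                 # personalized predictions
    with torch.no_grad():
        ref_logits = frozen_model(x)           # subject-independent predictions
    ce = F.cross_entropy(logits, y)
    # F.kl_div expects log-probabilities as input and probabilities as target;
    # this computes KL(reference || personalized), pulling the finetuned
    # model back toward the pretrained one.
    kl = F.kl_div(F.log_softmax(logits, dim=-1),
                  F.softmax(ref_logits, dim=-1),
                  reduction="batchmean")
    loss = ce + beta * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `frozen_model` would be a frozen copy of the pretrained subject-independent network (e.g. `copy.deepcopy(pretrained).eval()`), so the KL term penalizes outputs that drift away from it while the cross-entropy term fits the night's labels.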
Related papers
- SLEEPYLAND: trust begins with fair evaluation of automatic sleep staging models [0.0]
We present SLEEPYLAND, an open-source sleep staging evaluation framework. It includes more than 220'000 hours of in-domain (ID) sleep recordings and more than 84'000 hours of out-of-domain (OOD) sleep recordings. We introduce SOMNUS, an ensemble combining models across architectures and channel setups via soft voting.
arXiv Detail & Related papers (2025-06-10T08:46:19Z) - Personalized Sleep Staging Leveraging Source-free Unsupervised Domain Adaptation [12.283567614448392]
We propose a novel Source-Free Unsupervised Individual Domain Adaptation (SF-UIDA) framework.
This two-step adaptation scheme allows the model to effectively adjust to new unlabeled individuals without needing source data.
Our framework has been applied to three established sleep staging models and tested on three public datasets, achieving state-of-the-art performance.
arXiv Detail & Related papers (2024-12-11T12:59:36Z) - Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z) - Quantifying the Impact of Data Characteristics on the Transferability of
Sleep Stage Scoring Models [0.10878040851637998]
Deep learning models for scoring sleep stages based on single-channel EEG have been proposed as a promising method for remote sleep monitoring.
Applying these models to new datasets, particularly from wearable devices, raises two questions.
First, when annotations on a target dataset are unavailable, which data characteristics affect sleep stage scoring performance the most, and by how much?
We propose a novel method for quantifying the impact of different data characteristics on the transferability of deep learning models.
arXiv Detail & Related papers (2023-03-28T07:57:21Z) - SA-DPSGD: Differentially Private Stochastic Gradient Descent based on
Simulated Annealing [25.25065807901922]
Differentially private stochastic gradient descent (DPSGD) is the most popular method for training with differential privacy in image recognition.
Existing DPSGD schemes lead to significant performance degradation, which prevents the application of differential privacy.
We propose a simulated annealing-based differentially private gradient descent scheme (SA-DPSGD) which accepts a candidate update with a probability that depends on the update quality and on the number of iterations.
arXiv Detail & Related papers (2022-11-14T09:20:48Z) - Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z) - Sleep Staging Based on Serialized Dual Attention Network [0.0]
We propose SDAN, a deep learning model based on raw EEG.
It serially combines channel attention and spatial attention mechanisms to filter and highlight key information.
It achieves excellent results in the N1 sleep stage compared to other methods.
arXiv Detail & Related papers (2021-07-18T13:18:12Z) - Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel
EEG Signal [63.18666008322476]
Sleep disorders are among the most prevalent health problems worldwide.
The basic tool used by specialists is the polysomnogram, a collection of different signals recorded during sleep.
Specialists have to score the different signals according to one of the standard guidelines.
arXiv Detail & Related papers (2021-03-30T09:59:56Z) - MSED: a multi-modal sleep event detection model for clinical sleep
analysis [62.997667081978825]
We designed a single deep neural network architecture to jointly detect sleep events in a polysomnogram.
The performance of the model was quantified by F1, precision, and recall scores, and by correlating index values to clinical values.
arXiv Detail & Related papers (2021-01-07T13:08:44Z) - RobustSleepNet: Transfer learning for automated sleep staging at scale [0.0]
Sleep disorder diagnosis relies on the analysis of polysomnography (PSG) records.
In practice, sleep stage classification relies on the visual inspection of 30-seconds epochs of polysomnography signals.
We introduce RobustSleepNet, a deep learning model for automatic sleep stage classification able to handle arbitrary PSG montages.
arXiv Detail & Related papers (2021-01-07T09:39:08Z) - Interpretable Machine Learning Approaches to Prediction of Chronic
Homelessness [2.294014185517203]
We introduce a machine learning approach to predict chronic homelessness from de-identified client shelter records.
Our model, HIFIS-RNN-MLP, incorporates both static and dynamic features of a client's history to forecast chronic homelessness 6 months into the client's future.
arXiv Detail & Related papers (2020-09-12T15:02:30Z) - Evaluating Prediction-Time Batch Normalization for Robustness under
Covariate Shift [81.74795324629712]
We study a simple method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift; a minimal sketch of the idea appears after this list.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
However, an adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
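As an aside on the prediction-time batch normalization entry above, the following minimal PyTorch sketch shows the general idea: switching BatchNorm layers to the statistics of the current test batch at inference, so that normalization adapts to covariate shift. It assumes a reasonably large test batch and illustrates the technique in general, not that paper's exact recipe.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def predict_with_batch_stats(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Run inference using the test batch's statistics in all BatchNorm layers."""
    model.eval()                          # dropout etc. stay in inference mode
    saved = []
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            saved.append((m, m.momentum))
            m.train()                     # normalize with the current batch's statistics
            m.momentum = 0.0              # leave the stored running stats untouched
    out = model(x)
    for m, momentum in saved:             # restore ordinary inference behavior
        m.momentum = momentum
        m.eval()
    return out
```

Setting `momentum` to zero keeps the stored running statistics from being overwritten while the BatchNorm layers are temporarily in training mode.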
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.