XSleepNet: Multi-View Sequential Model for Automatic Sleep Staging
- URL: http://arxiv.org/abs/2007.05492v4
- Date: Wed, 31 Mar 2021 21:52:00 GMT
- Title: XSleepNet: Multi-View Sequential Model for Automatic Sleep Staging
- Authors: Huy Phan, Oliver Y. Chén, Minh C. Tran, Philipp Koch, Alfred
Mertins, Maarten De Vos
- Abstract summary: XSleepNet is capable of learning a joint representation from both raw signals and time-frequency images.
XSleepNet consistently outperforms the single-view baselines and the multi-view baseline with a simple fusion strategy.
- Score: 20.431381506373395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automating sleep staging is vital to scale up sleep assessment and diagnosis
to serve millions experiencing sleep deprivation and disorders and enable
longitudinal sleep monitoring in home environments. Learning from raw
polysomnography signals and their derived time-frequency image representations
has been prevalent. However, learning from multi-view inputs (e.g., both the
raw signals and the time-frequency images) for sleep staging is difficult and
not well understood. This work proposes a sequence-to-sequence sleep staging
model, XSleepNet, that is capable of learning a joint representation from both
raw signals and time-frequency images. Since different views may generalize or
overfit at different rates, the proposed network is trained such that the
learning pace on each view is adapted based on their generalization/overfitting
behavior. In simple terms, the learning on a particular view is speeded up when
it is generalizing well and slowed down when it is overfitting. View-specific
generalization/overfitting measures are computed on-the-fly during the training
course and used to derive weights to blend the gradients from different views.
As a result, the network is able to retain the representation power of
different views in the joint features which represent the underlying
distribution better than those learned by each individual view alone.
Furthermore, the XSleepNet architecture is principally designed to gain
robustness to the amount of training data and to increase the complementarity
between the input views. Experimental results on five databases of different
sizes show that XSleepNet consistently outperforms the single-view baselines
and the multi-view baseline with a simple fusion strategy. Finally, XSleepNet
also outperforms prior sleep staging methods and improves previous
state-of-the-art results on the experimental databases.
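The adaptive learning-pace idea from the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the function name `view_weights` and the specific measures used here (validation-loss improvement as the generalization signal, growth of the train/validation gap as the overfitting signal) are assumptions standing in for the paper's on-the-fly estimates.

```python
def view_weights(train_losses, val_losses, eps=1e-8):
    """Blend-weight sketch: for each view, compare how much its validation
    loss improved (generalization) against how much its train/validation
    gap grew (overfitting). A view that is generalizing well gets a larger
    weight; a view that is overfitting gets a smaller one. The exact
    measures are hypothetical stand-ins for the paper's estimates.

    train_losses / val_losses: per-view (previous, current) loss pairs.
    Returns normalized weights used to blend the per-view gradients.
    """
    raw = []
    for (tr_prev, tr_cur), (va_prev, va_cur) in zip(train_losses, val_losses):
        generalization = va_prev - va_cur                      # val-loss improvement
        overfitting = (va_cur - tr_cur) - (va_prev - tr_prev)  # gap growth
        raw.append(max(generalization, eps) / max(overfitting, eps) ** 2)
    total = sum(raw)
    return [r / total for r in raw]


# Example: view 0 is still generalizing; view 1 has begun to overfit,
# so its contribution to the combined update is scaled down.
w = view_weights(
    train_losses=[(0.90, 0.70), (0.80, 0.50)],
    val_losses=[(0.95, 0.80), (0.90, 0.88)],
)
```

In training, such weights would scale each view's loss (or gradient) before the combined update, so the overfitting view effectively slows down while the generalizing view keeps learning at full pace.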
Related papers
- One Diffusion to Generate Them All [54.82732533013014]
OneDiffusion is a versatile, large-scale diffusion model that supports bidirectional image synthesis and understanding.
It enables conditional generation from inputs such as text, depth, pose, layout, and semantic maps.
OneDiffusion allows for multi-view generation, camera pose estimation, and instant personalization using sequential image inputs.
arXiv Detail & Related papers (2024-11-25T12:11:05Z)
- ST-USleepNet: A Spatial-Temporal Coupling Prominence Network for Multi-Channel Sleep Staging [9.83413257745779]
We propose a novel framework named ST-USleepNet, comprising a spatial-temporal graph construction module (ST) and a U-shaped sleep network (USleepNet).
The ST module converts raw signals into a spatial-temporal graph based on signal similarity, temporal, and spatial relationships to model spatial-temporal coupling patterns.
The USleepNet employs a U-shaped structure for both the temporal and spatial streams, mirroring its original use in image segmentation to isolate significant targets.
arXiv Detail & Related papers (2024-08-21T14:57:44Z)
- WaveSleepNet: An Interpretable Network for Expert-like Sleep Staging [4.4697567606459545]
WaveSleepNet is an interpretable neural network for sleep staging.
WaveSleepNet uses latent space representations to identify characteristic wave prototypes corresponding to different sleep stages.
The efficacy of WaveSleepNet is validated across three public datasets.
arXiv Detail & Related papers (2024-04-11T03:47:58Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper challenges a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers them under a unified framework.
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- Quantifying the Impact of Data Characteristics on the Transferability of Sleep Stage Scoring Models [0.10878040851637998]
Deep learning models for scoring sleep stages based on single-channel EEG have been proposed as a promising method for remote sleep monitoring.
Applying these models to new datasets, particularly from wearable devices, raises two questions.
First, when annotations on a target dataset are unavailable, which different data characteristics affect the sleep stage scoring performance the most and by how much?
We propose a novel method for quantifying the impact of different data characteristics on the transferability of deep learning models.
arXiv Detail & Related papers (2023-03-28T07:57:21Z)
- L-SeqSleepNet: Whole-cycle Long Sequence Modelling for Automatic Sleep Staging [16.96499618061823]
L-SeqSleepNet is a new deep learning model that takes into account whole-cycle sleep information for sleep staging.
L-SeqSleepNet is able to alleviate the predominance of N2 sleep to bring down errors in other sleep stages.
arXiv Detail & Related papers (2023-01-09T15:44:43Z)
- ProductGraphSleepNet: Sleep Staging using Product Spatio-Temporal Graph Learning with Attentive Temporal Aggregation [4.014524824655106]
This work proposes an adaptive product graph learning-based graph convolutional network, named ProductGraphSleepNet, for learning joint spatio-temporal graphs.
The proposed network makes it possible for clinicians to comprehend and interpret the learned connectivity graphs for sleep stages.
arXiv Detail & Related papers (2022-12-09T14:34:58Z)
- Temporal Graph Network Embedding with Causal Anonymous Walks Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
For evaluation, we provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model in the real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z)
- Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel EEG Signal [63.18666008322476]
Sleep problems are among the most prevalent health issues worldwide.
The basic tool used by specialists is the polysomnogram, which is a collection of different signals recorded during sleep.
Specialists have to score the different signals according to one of the standard guidelines.
arXiv Detail & Related papers (2021-03-30T09:59:56Z)
- CharacterGAN: Few-Shot Keypoint Character Animation and Reposing [64.19520387536741]
We introduce CharacterGAN, a generative model that can be trained on only a few samples of a given character.
Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback.
We show that our approach outperforms recent baselines and creates realistic animations for diverse characters.
arXiv Detail & Related papers (2021-02-05T12:38:15Z)
- Unsupervised Learning on Monocular Videos for 3D Human Pose Estimation [121.5383855764944]
We use contrastive self-supervised learning to extract rich latent vectors from single-view videos.
We show that applying CSS only to the time-variant features, while also reconstructing the input and encouraging a gradual transition between nearby and away features, yields a rich latent space.
Our approach outperforms other unsupervised single-view methods and matches the performance of multi-view techniques.
arXiv Detail & Related papers (2020-12-02T20:27:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.