SalientSleepNet: Multimodal Salient Wave Detection Network for Sleep
Staging
- URL: http://arxiv.org/abs/2105.13864v1
- Date: Mon, 24 May 2021 16:32:09 GMT
- Authors: Ziyu Jia, Youfang Lin, Jing Wang, Xuehui Wang, Peiyi Xie and Yingbin
Zhang
- Abstract summary: We propose SalientSleepNet, a salient wave detection network for sleep staging.
It is composed of two independent $\rm U^2$-like streams to extract salient features from multimodal data.
Experiments on two datasets demonstrate that SalientSleepNet outperforms the state-of-the-art baselines.
- Score: 10.269152939137854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sleep staging is fundamental for sleep assessment and disease diagnosis.
Although previous attempts to classify sleep stages have achieved high
classification performance, several challenges remain open: 1) How to
effectively extract salient waves in multimodal sleep data; 2) How to capture
the multi-scale transition rules among sleep stages; 3) How to adaptively seize
the key role of specific modality for sleep staging. To address these
challenges, we propose SalientSleepNet, a multimodal salient wave detection
network for sleep staging. Specifically, SalientSleepNet is a temporal fully
convolutional network based on the $\rm U^2$-Net architecture that is
originally proposed for salient object detection in computer vision. It is
mainly composed of two independent $\rm U^2$-like streams to extract the
salient features from multimodal data, respectively. Meanwhile, the multi-scale
extraction module is designed to capture multi-scale transition rules among
sleep stages. Besides, the multimodal attention module is proposed to
adaptively capture valuable information from multimodal data for the specific
sleep stage. Experiments on two datasets demonstrate that SalientSleepNet
outperforms the state-of-the-art baselines. It is worth noting that this model
has the fewest parameters among the existing deep neural
network models.
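The multimodal attention module described above adaptively weights each modality's features for a given sleep stage. A minimal, framework-free sketch of such modality gating is shown below; the function names, the scalar scoring scheme, and the hand-set weights are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def multimodal_attention(eeg_feat, eog_feat, w_eeg, w_eog):
    """Toy modality gate: score each modality's feature vector,
    softmax the two scores into attention weights, and return the
    attention-weighted sum of the feature vectors. The scoring
    weights (w_eeg, w_eog) stand in for learned parameters."""
    s_eeg = sum(w * f for w, f in zip(w_eeg, eeg_feat))
    s_eog = sum(w * f for w, f in zip(w_eog, eog_feat))
    a_eeg, a_eog = softmax([s_eeg, s_eog])
    return [a_eeg * e + a_eog * o for e, o in zip(eeg_feat, eog_feat)]
```

Because the weights come from a softmax, the fused vector is a convex combination of the per-modality features, so a modality with a higher score dominates without the other being discarded entirely.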
Related papers
- ST-USleepNet: A Spatial-Temporal Coupling Prominence Network for Multi-Channel Sleep Staging [9.83413257745779]
Sleep staging is critical for assessing sleep quality and diagnosing disorders.
Recent advancements in artificial intelligence have driven the development of automated sleep staging models.
We propose a novel framework named ST-USleepNet, comprising a spatial-temporal graph construction module and a U-shaped sleep network.
arXiv Detail & Related papers (2024-08-21T14:57:44Z)
- WaveSleepNet: An Interpretable Network for Expert-like Sleep Staging [4.4697567606459545]
WaveSleepNet is an interpretable neural network for sleep staging.
WaveSleepNet uses latent space representations to identify characteristic wave prototypes corresponding to different sleep stages.
The efficacy of WaveSleepNet is validated across three public datasets.
arXiv Detail & Related papers (2024-04-11T03:47:58Z)
- Quantifying the Impact of Data Characteristics on the Transferability of Sleep Stage Scoring Models [0.10878040851637998]
Deep learning models for scoring sleep stages based on single-channel EEG have been proposed as a promising method for remote sleep monitoring.
Applying these models to new datasets, particularly from wearable devices, raises two questions.
First, when annotations on a target dataset are unavailable, which different data characteristics affect the sleep stage scoring performance the most and by how much?
We propose a novel method for quantifying the impact of different data characteristics on the transferability of deep learning models.
arXiv Detail & Related papers (2023-03-28T07:57:21Z)
- CoRe-Sleep: A Multimodal Fusion Framework for Time Series Robust to Imperfect Modalities [10.347153539399836]
CoRe-Sleep is a Coordinated Representation multimodal fusion network.
We show how appropriately handling multimodal information can be the key to achieving such robustness.
This work aims at bridging the gap between automated analysis tools and their clinical utility.
arXiv Detail & Related papers (2023-03-27T18:28:58Z)
- MM-TTA: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation [104.48766162008815]
We propose and explore a new multi-modal extension of test-time adaptation for 3D semantic segmentation.
To design a framework that can take full advantage of multi-modality, each modality provides regularized self-supervisory signals to other modalities.
Our regularized pseudo labels produce stable self-learning signals in numerous multi-modal test-time adaptation scenarios.
arXiv Detail & Related papers (2022-04-27T02:28:12Z)
- Weakly Aligned Feature Fusion for Multimodal Object Detection [52.15436349488198]
Multimodal data often suffer from the position shift problem, i.e., the image pair is not strictly aligned.
This problem makes it difficult to fuse multimodal features and complicates the training of the convolutional neural network (CNN).
In this article, we propose a general multimodal detector named aligned region CNN (AR-CNN) to tackle the position shift problem.
arXiv Detail & Related papers (2022-04-21T02:35:23Z)
- TransSleep: Transitioning-aware Attention-based Deep Neural Network for Sleep Staging [2.105172041656126]
We propose a novel deep neural network structure, TransSleep, that captures distinctive local temporal patterns.
Results show that TransSleep achieves promising performance in automatic sleep staging.
arXiv Detail & Related papers (2022-03-22T08:55:32Z)
- Routing with Self-Attention for Multimodal Capsule Networks [108.85007719132618]
We present a new multimodal capsule network that allows us to leverage the strength of capsules in the context of a multimodal learning framework.
To adapt the capsules to large-scale input data, we propose a novel routing by self-attention mechanism that selects relevant capsules.
This allows not only for robust training with noisy video data, but also to scale up the size of the capsule network compared to traditional routing methods.
arXiv Detail & Related papers (2021-12-01T19:01:26Z)
- Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel EEG Signal [63.18666008322476]
Sleep problems are among the major health issues worldwide.
The basic tool used by specialists is the polysomnogram, a collection of different signals recorded during sleep.
Specialists have to score the different signals according to one of the standard guidelines.
arXiv Detail & Related papers (2021-03-30T09:59:56Z)
- Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt the graph propagation to capture the observed spatial contexts.
We then apply the attention mechanism on the propagation, which encourages the network to model the contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.