Thermal Imaging and Radar for Remote Sleep Monitoring of Breathing and Apnea
- URL: http://arxiv.org/abs/2407.11936v2
- Date: Wed, 7 Aug 2024 06:43:51 GMT
- Title: Thermal Imaging and Radar for Remote Sleep Monitoring of Breathing and Apnea
- Authors: Kai Del Regno, Alexander Vilesov, Adnan Armouti, Anirudh Bindiganavale Harish, Selim Emir Can, Ashley Kita, Achuta Kadambi
- Abstract summary: We show the first comparison of radar and thermal imaging for sleep monitoring.
Our thermal imaging method detects apneas with an accuracy of 0.99, a precision of 0.68, a recall of 0.74, an F1 score of 0.71, and an intra-class correlation of 0.70.
We present a novel proposal for classifying obstructive and central sleep apnea by leveraging a multimodal setup.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Polysomnography (PSG), the current gold standard for monitoring and detecting sleep disorders, is cumbersome and costly. At-home testing solutions, known as home sleep apnea testing (HSAT), exist; however, they are contact-based, which limits the ability of some patient populations to tolerate testing and discourages widespread deployment. Previous work on non-contact sleep monitoring for sleep apnea detection has estimated respiratory effort using radar or nasal airflow using a thermal camera, but has not compared the two modalities or used them together. We conducted a study on 10 participants, ages 34 - 78, with suspected sleep disorders, using a hardware setup with a synchronized radar and thermal camera. We show the first comparison of radar and thermal imaging for sleep monitoring and find that our thermal imaging method significantly outperforms radar. Our thermal imaging method detects apneas with an accuracy of 0.99, a precision of 0.68, a recall of 0.74, an F1 score of 0.71, and an intra-class correlation of 0.70; our radar method detects apneas with an accuracy of 0.83, a precision of 0.13, a recall of 0.86, an F1 score of 0.22, and an intra-class correlation of 0.13. We also present a novel proposal for classifying obstructive and central sleep apnea by leveraging a multimodal setup. This method could be used to accurately detect and classify apneas during sleep with non-contact sensors, thereby improving diagnostic capabilities in patient populations unable to tolerate current technology.
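The apnea-detection metrics quoted above (accuracy, precision, recall, F1) all follow from a single event-level confusion matrix. A minimal Python sketch; the counts below are hypothetical, chosen only to illustrate how a rare-event task can yield near-perfect accuracy alongside moderate precision:

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: apnea events are rare, so true negatives dominate
# and accuracy stays high even when precision is moderate.
acc, prec, rec, f1 = binary_metrics(tp=74, fp=35, fn=26, tn=9865)
print(f"acc={acc:.2f} prec={prec:.2f} rec={rec:.2f} f1={f1:.2f}")
# → acc=0.99 prec=0.68 rec=0.74 f1=0.71
```

Because apneas are a small fraction of all scored events, accuracy is dominated by true negatives; precision, recall, and F1 are the more informative figures here.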
Related papers
- SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals
Sleep is a complex physiological process evaluated through various modalities recording electrical brain, cardiac, and respiratory activities.
We developed SleepFM, the first multi-modal foundation model for sleep analysis.
We show that a novel leave-one-out approach for contrastive learning significantly improves downstream task performance.
arXiv Detail & Related papers (2024-05-28T02:43:53Z) - SleepVST: Sleep Staging from Near-Infrared Video Signals using Pre-Trained Transformers
We introduce SleepVST, a transformer model which enables state-of-the-art performance in camera-based sleep stage classification.
We show that SleepVST can be successfully transferred to cardio-respiratory waveforms extracted from video, enabling fully contact-free sleep staging.
arXiv Detail & Related papers (2024-04-04T23:24:14Z) - SOUL: An Energy-Efficient Unsupervised Online Learning Seizure Detection Classifier
Implantable devices that record neural activity and detect seizures have been adopted to issue warnings or trigger neurostimulation to suppress seizures.
For an implantable seizure detection system, a low power, at-the-edge, online learning algorithm can be employed to dynamically adapt to neural signal drifts.
SOUL was fabricated in TSMC's 28 nm process, occupies 0.1 mm^2, and achieves 1.5 nJ/classification energy efficiency, at least 24x more efficient than the state of the art.
arXiv Detail & Related papers (2021-10-01T23:01:20Z) - The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk Screening by Eye-region Manifestations
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
The performance was measured using area under receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1.
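The AUC mentioned above can be computed without fixing a decision threshold: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). A small illustrative sketch; the scores are made up:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U statistic); ties count as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative classifier scores for 3 positive and 3 negative cases.
print(round(auc_from_scores([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]), 3))
# → 0.889
```

The pairwise definition is O(n*m); production libraries compute the same quantity from a rank sum instead.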
arXiv Detail & Related papers (2021-09-18T02:28:01Z) - In-Bed Person Monitoring Using Thermal Infrared Sensors
We use 'Griddy', a prototype with a Panasonic Grid-EYE, a low-resolution infrared thermopile array sensor, which offers more privacy.
For this purpose, two datasets were captured, one (480 images) under constant conditions, and a second one (200 images) under different variations.
We test three machine learning algorithms: Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), and a Neural Network (NN).
arXiv Detail & Related papers (2021-07-16T15:59:07Z) - Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel EEG Signal
Sleep disorders are among the most common health problems worldwide.
The basic tool used by specialists is the polysomnogram, a collection of different signals recorded during sleep.
Specialists score these signals according to one of the standard guidelines.
arXiv Detail & Related papers (2021-03-30T09:59:56Z) - Temporal convolutional networks and transformers for classifying the sleep stage in awake or asleep using pulse oximetry signals
We develop a network architecture with the aim of classifying the sleep stage in awake or asleep using only HR signals from a pulse oximeter.
Transformers are able to model the sequence, learning the transition rules between sleep stages.
The overall accuracy, specificity, sensitivity, and Cohen's Kappa coefficient were 90.0%, 94.9%, 78.1%, and 0.73, respectively.
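Cohen's Kappa, reported above, corrects raw agreement for the agreement expected by chance given the class marginals. A minimal sketch for a binary awake/asleep confusion matrix; the counts are hypothetical, not taken from the paper:

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a binary (e.g. awake/asleep) confusion matrix."""
    n = tp + fp + fn + tn
    p_o = (tp + tn) / n  # observed agreement (= accuracy)
    # Expected chance agreement from the marginal class frequencies.
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

# Hypothetical epoch counts for an imbalanced awake/asleep task.
print(f"{cohens_kappa(tp=200, fp=50, fn=30, tn=720):.2f}")
# → 0.78
```

Unlike raw accuracy, kappa stays low for a classifier that merely predicts the majority class, which matters when one sleep stage dominates the recording.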
arXiv Detail & Related papers (2021-01-29T22:58:33Z) - MSED: a multi-modal sleep event detection model for clinical sleep analysis
We designed a single deep neural network architecture to jointly detect sleep events in a polysomnogram.
The performance of the model was quantified by F1, precision, and recall scores, and by correlating index values to clinical values.
arXiv Detail & Related papers (2021-01-07T13:08:44Z) - Classifying sleep-wake stages through recurrent neural networks using pulse oximetry signals
The regulation of the autonomic nervous system changes with the sleep stages.
We exploit these changes with the aim of classifying the sleep stages in awake or asleep using pulse oximeter signals.
We applied a recurrent neural network to heart rate and peripheral oxygen saturation signals to classify the sleep stage every 30 seconds.
arXiv Detail & Related papers (2020-08-07T21:43:46Z) - DeepBeat: A multi-task deep learning approach to assess signal quality and arrhythmia detection in wearable devices
We develop a multi-task deep learning method to assess signal quality and arrhythmia event detection in wearable photoplethysmography devices for real-time detection of atrial fibrillation (AF).
We train our algorithm on over one million simulated unlabeled physiological signals and fine-tune on a curated dataset of over 500K labeled signals from over 100 individuals from 3 different wearable devices.
We show that two-stage training can help address the unbalanced data problem common to biomedical applications where large well-annotated datasets are scarce.
arXiv Detail & Related papers (2020-01-01T07:41:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.