SlAction: Non-intrusive, Lightweight Obstructive Sleep Apnea Detection
using Infrared Video
- URL: http://arxiv.org/abs/2309.02713v1
- Date: Wed, 6 Sep 2023 04:52:02 GMT
- Title: SlAction: Non-intrusive, Lightweight Obstructive Sleep Apnea Detection
using Infrared Video
- Authors: You Rim Choi, Gyeongseon Eo, Wonhyuck Youn, Hyojin Lee, Haemin Jang,
Dongyoon Kim, Hyunwoo Shin, Hyung-Sin Kim
- Abstract summary: Obstructive sleep apnea (OSA) is a prevalent sleep disorder affecting approximately one billion people world-wide.
We present SlAction, a non-intrusive OSA detection system for daily sleep environments using infrared videos.
- Score: 1.850099608285478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obstructive sleep apnea (OSA) is a prevalent sleep disorder affecting
approximately one billion people world-wide. The current gold standard for
diagnosing OSA, Polysomnography (PSG), involves an overnight hospital stay with
multiple attached sensors, leading to potential inaccuracies due to the
first-night effect. To address this, we present SlAction, a non-intrusive OSA
detection system for daily sleep environments using infrared videos.
Recognizing that sleep videos exhibit minimal motion, this work investigates
the fundamental question: "Are respiratory events adequately reflected in human
motions during sleep?" Analyzing the largest sleep video dataset of 5,098
hours, we establish correlations between OSA events and human motions during
sleep. Our approach uses a low frame rate (2.5 FPS) together with a large
window size (60 seconds) and step (30 seconds) for sliding-window analysis,
capturing the slow, long-term motions related to OSA. Furthermore, we utilize
a lightweight deep neural
network for resource-constrained devices, ensuring all video streams are
processed locally without compromising privacy. Evaluations show that SlAction
achieves an average F1 score of 87.6% in detecting OSA across various
environments. Implementing SlAction on NVIDIA Jetson Nano enables real-time
inference (~3 seconds for a 60-second video clip), highlighting its potential
for early detection and personalized treatment of OSA.
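To make the sliding-window parameters above concrete, the following minimal Python sketch groups frames sampled at 2.5 FPS into 60-second clips advanced in 30-second steps; the function names, array shapes, and classifier placeholder are illustrative assumptions, not the authors' released code.

import numpy as np

FPS = 2.5           # low frame rate described in the abstract
WINDOW_SEC = 60     # sliding-window size in seconds
STEP_SEC = 30       # sliding-window step in seconds

FRAMES_PER_WINDOW = int(FPS * WINDOW_SEC)   # 150 frames per clip
FRAMES_PER_STEP = int(FPS * STEP_SEC)       # advance by 75 frames

def sliding_windows(frames: np.ndarray):
    """Yield (start_second, clip) pairs from an array of frames shaped (T, H, W)."""
    for start in range(0, len(frames) - FRAMES_PER_WINDOW + 1, FRAMES_PER_STEP):
        clip = frames[start:start + FRAMES_PER_WINDOW]
        yield start / FPS, clip

# Example: one hour of synthetic low-resolution infrared frames at 2.5 FPS.
frames = np.zeros((int(3600 * FPS), 64, 64), dtype=np.uint8)
for start_second, clip in sliding_windows(frames):
    # Each 150-frame clip would be passed to a lightweight OSA classifier here.
    pass

With these settings each clip contains 150 frames and consecutive clips overlap by half, matching the 60-second window and 30-second step reported in the abstract.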
Related papers
- SleepVST: Sleep Staging from Near-Infrared Video Signals using Pre-Trained Transformers [0.6599755599064447]
We introduce SleepVST, a transformer model which enables state-of-the-art performance in camera-based sleep stage classification.
We show that SleepVST can be successfully transferred to cardio-respiratory waveforms extracted from video, enabling fully contact-free sleep staging.
arXiv Detail & Related papers (2024-04-04T23:24:14Z) - Deep Learning-Enabled Sleep Staging From Vital Signs and Activity
Measured Using a Near-Infrared Video Camera [1.0499611180329802]
We use heart rate, breathing rate and activity measures, all derived from a near-infrared video camera, to perform sleep stage classification.
We achieve an accuracy of 73.4% and a Cohen's kappa of 0.61 in four-class sleep stage classification.
arXiv Detail & Related papers (2023-06-06T14:21:22Z) - Attention-based Learning for Sleep Apnea and Limb Movement Detection
using Wi-Fi CSI Signals [6.682252544052753]
We propose the attention-based learning for sleep apnea and limb movement detection (ALESAL) system.
The proposed ALESAL system achieves a weighted F1-score of 84.33, outperforming existing non-attention-based methods such as the support vector machine and the deep multilayer perceptron.
arXiv Detail & Related papers (2023-03-26T19:40:37Z) - SleepMore: Sleep Prediction at Scale via Multi-Device WiFi Sensing [0.0]
We propose SleepMore, an accurate and easy-to-deploy sleep-tracking approach based on machine learning over the user's WiFi network activity.
We validate SleepMore using data from a month-long user study involving 46 college students and draw comparisons with the Oura Ring wearable.
Our results demonstrate that SleepMore produces sleep statistics statistically indistinguishable from the Oura Ring baseline for predictions made within a 5% uncertainty rate.
arXiv Detail & Related papers (2022-10-24T16:42:56Z) - ETAD: A Unified Framework for Efficient Temporal Action Detection [70.21104995731085]
Untrimmed video understanding tasks such as temporal action detection (TAD) often suffer from heavy demands on computing resources.
We build a unified framework for efficient end-to-end temporal action detection (ETAD).
ETAD achieves state-of-the-art performance on both THUMOS-14 and ActivityNet-1.3.
arXiv Detail & Related papers (2022-05-14T21:16:21Z) - E^2TAD: An Energy-Efficient Tracking-based Action Detector [78.90585878925545]
This paper presents a tracking-based solution to accurately and efficiently localize predefined key actions.
It won first place in the UAV-Video Track of the 2021 Low-Power Computer Vision Challenge (LPCVC).
arXiv Detail & Related papers (2022-04-09T07:52:11Z) - Argus++: Robust Real-time Activity Detection for Unconstrained Video
Streams with Overlapping Cube Proposals [85.76513755331318]
Argus++ is a robust real-time activity detection system for analyzing unconstrained video streams.
The overall system is optimized for real-time processing on standalone consumer-level hardware.
arXiv Detail & Related papers (2022-01-14T03:35:22Z) - In-Bed Person Monitoring Using Thermal Infrared Sensors [53.561797148529664]
We use 'Griddy', a prototype with a Panasonic Grid-EYE, a low-resolution infrared thermopile array sensor, which offers more privacy.
For this purpose, two datasets were captured, one (480 images) under constant conditions, and a second one (200 images) under different variations.
We test three machine learning algorithms: Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), and Neural Networks (NN).
arXiv Detail & Related papers (2021-07-16T15:59:07Z) - Human POSEitioning System (HPS): 3D Human Pose Estimation and
Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of the two, resulting in pose accuracy free of drift.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z) - WiSleep: Scalable Sleep Monitoring and Analytics Using Passive WiFi
Sensing [0.0]
WiSleep is a sleep monitoring and analytics platform using smartphone network connections that are passively sensed from WiFi infrastructure.
We propose an unsupervised ensemble model of Bayesian change point detection to predict sleep and wake-up times.
We show that WiSleep can process data from 20,000 users on a single commodity server, allowing it to scale to large campus populations with low server requirements.
arXiv Detail & Related papers (2021-02-07T00:05:14Z) - MSED: a multi-modal sleep event detection model for clinical sleep
analysis [62.997667081978825]
We designed a single deep neural network architecture to jointly detect sleep events in a polysomnogram.
The performance of the model was quantified by F1, precision, and recall scores, and by correlating index values to clinical values.
arXiv Detail & Related papers (2021-01-07T13:08:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.