Rapid Extraction of Respiratory Waveforms from Photoplethysmography: A
Deep Encoder Approach
- URL: http://arxiv.org/abs/2212.12578v1
- Date: Thu, 22 Dec 2022 13:07:13 GMT
- Title: Rapid Extraction of Respiratory Waveforms from Photoplethysmography: A
Deep Encoder Approach
- Authors: Harry J. Davies and Danilo P. Mandic
- Abstract summary: Much of the information about breathing is contained within the photoplethysmography signal, through changes in blood flow, heart rate and stroke volume.
We aim to leverage this fact by employing a novel deep learning framework based on a repurposed convolutional autoencoder.
We show that the model is capable of producing respiratory waveforms that approach the gold standard, while in turn producing state-of-the-art respiratory rate estimates.
- Score: 24.594587557319837
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Much of the information about breathing is contained within the
photoplethysmography (PPG) signal, through changes in venous blood flow, heart
rate and stroke volume. We aim to leverage this fact by employing a novel deep
learning framework based on a repurposed convolutional autoencoder.
Our model aims to encode all of the relevant respiratory information contained
within the photoplethysmography waveform, and decode it into a waveform that is
similar to a gold standard respiratory reference. The model is employed on two
photoplethysmography data sets, namely Capnobase and BIDMC. We show that the
model is capable of producing respiratory waveforms that approach the gold
standard, while in turn producing state-of-the-art respiratory rate estimates.
We also show that when it comes to capturing more advanced respiratory waveform
characteristics such as duty cycle, our model is for the most part
unsuccessful. A suggested reason for this, in light of a previous study on
in-ear PPG, is that the respiratory variations in finger-PPG are far weaker
than those at other recording locations. Importantly, our model can perform
these waveform estimates in a fraction of a millisecond, giving it the capacity
to produce over 6 hours of respiratory waveforms in a single second. Moreover,
we attempt to interpret the behaviour of the kernel weights within the model,
showing that in part our model intuitively selects different breathing
frequencies. The model proposed in this work could help to improve the
usefulness of consumer PPG-based wearables for medical applications, where
detailed respiratory information is required.
Related papers
- Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning [49.275450836604726]
We present a novel frequency-based Self-Supervised Learning (SSL) approach that significantly enhances the efficacy of pre-training.
We employ a two-branch framework empowered by knowledge distillation, enabling the model to take both the filtered and original images as input.
arXiv Detail & Related papers (2024-09-16T15:10:07Z)
- RepAugment: Input-Agnostic Representation-Level Augmentation for Respiratory Sound Classification [2.812716452984433]
This paper explores the efficacy of pretrained speech models for respiratory sound classification.
We find that there is a characterization gap between speech and lung sound samples, and to bridge this gap, data augmentation is essential.
We propose RepAugment, an input-agnostic representation-level augmentation technique that outperforms SpecAugment.
arXiv Detail & Related papers (2024-05-05T16:45:46Z)
- Validated respiratory drug deposition predictions from 2D and 3D medical images with statistical shape models and convolutional neural networks [47.187609203210705]
We aim to develop and validate an automated computational framework for patient-specific deposition modelling.
An image processing approach is proposed that could produce 3D patient respiratory geometries from 2D chest X-rays and 3D CT images.
arXiv Detail & Related papers (2023-03-02T07:47:07Z)
- Image Synthesis with Disentangled Attributes for Chest X-Ray Nodule Augmentation and Detection [52.93342510469636]
Lung nodule detection in chest X-ray (CXR) images is common in the early screening of lung cancer.
Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists for nodule screening in CXR.
To alleviate the limited availability of such datasets, lung nodule synthesis methods are proposed for the sake of data augmentation.
arXiv Detail & Related papers (2022-07-19T16:38:48Z)
- What Makes for Automatic Reconstruction of Pulmonary Segments [50.216231776343115]
3D reconstruction of pulmonary segments plays an important role in surgical treatment planning of lung cancer.
However, automatic reconstruction of pulmonary segments remains unexplored in the era of deep learning.
We propose ImPulSe, a deep implicit surface model designed for pulmonary segment reconstruction.
arXiv Detail & Related papers (2022-07-07T04:24:17Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Sleep Apnea and Respiratory Anomaly Detection from a Wearable Band and Oxygen Saturation [1.2291501047353484]
There is a need in general medicine and critical care for a more convenient method to automatically detect sleep apnea from a simple, easy-to-wear device.
The objective is to automatically detect abnormal respiration and estimate the Apnea-Hypopnea-Index (AHI) with a wearable respiratory device.
Four models were trained: one using the respiratory features only, one using a feature from the SpO2 (%) signal only, and two that use both the respiratory features and the SpO2 (%) feature.
arXiv Detail & Related papers (2021-02-24T02:04:57Z)
- 2-D Respiration Navigation Framework for 3-D Continuous Cardiac Magnetic Resonance Imaging [61.701281723900216]
We propose a sampling adaption to acquire 2-D respiration information during a continuous scan.
We develop a pipeline to extract the different respiration states from the acquired signals, which are used to reconstruct data from one respiration phase.
arXiv Detail & Related papers (2020-12-26T08:29:57Z)
- CNN-MoE based framework for classification of respiratory anomalies and lung disease detection [33.45087488971683]
This paper presents and explores a robust deep learning framework for auscultation analysis.
It aims to classify anomalies in respiratory cycles and detect disease from respiratory sound recordings.
arXiv Detail & Related papers (2020-04-04T21:45:06Z)
- Abnormal respiratory patterns classifier may contribute to large-scale screening of people infected with COVID-19 in an accurate and unobtrusive manner [38.59200764343499]
During the epidemic prevention and control period, our study can be helpful in the prognosis, diagnosis and screening of patients infected with COVID-19.
It can be used to distinguish various respiratory patterns, and our device can be preliminarily put to practical use.
arXiv Detail & Related papers (2020-02-12T09:42:57Z)
- Robust Deep Learning Framework For Predicting Respiratory Anomalies and Diseases [26.786743524562322]
This paper presents a robust deep learning framework developed to detect respiratory diseases from recordings of respiratory sounds.
A back-end deep learning model classifies the features into classes of respiratory disease or anomaly.
Experiments, conducted over the ICBHI benchmark dataset of respiratory sounds, evaluate the ability of the framework to classify sounds.
arXiv Detail & Related papers (2020-01-21T15:26:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.