Assessment of Deep Learning-based Heart Rate Estimation using Remote
Photoplethysmography under Different Illuminations
- URL: http://arxiv.org/abs/2107.13193v1
- Date: Wed, 28 Jul 2021 06:50:52 GMT
- Title: Assessment of Deep Learning-based Heart Rate Estimation using Remote
Photoplethysmography under Different Illuminations
- Authors: Ze Yang, Haofei Wang, Feng Lu
- Abstract summary: We present a public dataset, namely the BH-rPPG dataset, which contains data from twelve subjects under three illuminations: low, medium, and high.
We evaluate the performance of three deep learning-based methods using two public datasets: the UBFC-rPPG dataset and the BH-rPPG dataset.
- Score: 17.60589015651357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remote photoplethysmography (rPPG) monitors heart rate without requiring
physical contact, which enables a wide variety of applications. Deep
learning-based rPPG methods have demonstrated superior performance over
traditional approaches in controlled settings. However, the lighting in indoor
spaces is typically complex, with uneven light distribution and frequent
variations in illumination, and a fair comparison of different methods under
different illuminations on the same dataset has been lacking. In this paper, we
present a public dataset, namely the BH-rPPG dataset, which contains data from
twelve subjects under three illumination levels: low, medium, and high. We also
provide ground-truth heart rate measured by an oximeter. We compare the
performance of three deep learning-based methods against that of four
traditional methods on two public datasets: the UBFC-rPPG dataset and the
BH-rPPG dataset. The experimental results demonstrate that traditional methods
are generally more robust under fluctuating illumination. We find that rPPGNet
achieves the lowest MAE among the deep learning-based methods under medium
illumination, whereas CHROM achieves an MAE of 1.5 beats per minute (BPM),
outperforming rPPGNet by 60%. These findings suggest that illumination
variation should be taken into account when developing deep learning-based
heart rate estimation algorithms. This work serves as a benchmark for rPPG
performance evaluation and opens a pathway for future investigation into
deep learning-based rPPG under illumination variations.
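For context, CHROM is one of the traditional chrominance-based baselines named in the abstract, and MAE in BPM against the oximeter reference is the reported figure of merit. Below is a minimal sketch of a CHROM-style pulse extraction and the MAE metric; the 30 fps frame rate, the 0.7-4 Hz pass-band, the filter order, and the variable names (e.g. `rgb_trace`) are illustrative assumptions, not values taken from the paper or its dataset.

```python
# Minimal CHROM-style rPPG sketch plus the MAE metric (assumed parameters, not the paper's exact setup).
import numpy as np
from scipy.signal import butter, filtfilt

def chrom_pulse(rgb_trace, fs=30.0, band=(0.7, 4.0)):
    """Project a (T, 3) face-ROI RGB trace onto the CHROM chrominance axes and return a pulse signal."""
    rgb = rgb_trace / rgb_trace.mean(axis=0)            # normalize each channel by its temporal mean
    x = 3.0 * rgb[:, 0] - 2.0 * rgb[:, 1]               # X = 3R - 2G
    y = 1.5 * rgb[:, 0] + rgb[:, 1] - 1.5 * rgb[:, 2]   # Y = 1.5R + G - 1.5B
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)       # keep only plausible heart-rate frequencies
    alpha = xf.std() / yf.std()
    return xf - alpha * yf                               # alpha-tuned chrominance combination

def estimate_hr_bpm(pulse, fs=30.0):
    """Heart rate from the dominant spectral peak of the pulse signal, in beats per minute."""
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    power = np.abs(np.fft.rfft(pulse)) ** 2
    valid = (freqs >= 0.7) & (freqs <= 4.0)              # roughly 42-240 BPM
    return 60.0 * freqs[valid][np.argmax(power[valid])]

def mae_bpm(hr_est, hr_ref):
    """Mean absolute error in BPM against the oximeter ground truth."""
    return float(np.mean(np.abs(np.asarray(hr_est) - np.asarray(hr_ref))))
```

Under this metric, the 1.5 BPM figure quoted for CHROM corresponds to an average absolute heart-rate error of 1.5 beats per minute against the oximeter reference.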
Related papers
- Generalization of Video-Based Heart Rate Estimation Methods To Low Illumination and Elevated Heart Rates [3.8886059978578595]
Heart rate is a physiological signal that provides information about an individual's health and affective state.
We evaluate representative state-of-the-art methods for estimation of heart rate using remote photoplethysmography (rPPG).
Our experimental results indicate that classical methods are not significantly impacted by low-light conditions.
Some deep learning methods were found to be more robust to changes in lighting conditions but encountered challenges in estimating high heart rates.
arXiv Detail & Related papers (2025-03-11T18:29:10Z)
- Domain Generalization for Endoscopic Image Segmentation by Disentangling Style-Content Information and SuperPixel Consistency [1.4991956341367338]
We propose an approach for style-content disentanglement using instance normalization and instance selective whitening (ISW) for improved domain generalization.
We evaluate our approach on two datasets: EndoUDA Barrett's Esophagus and EndoUDA polyps, and compare its performance with three state-of-the-art (SOTA) methods.
arXiv Detail & Related papers (2024-09-19T04:10:04Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Phase Unwrapping of Color Doppler Echocardiography using Deep Learning [1.3534683694551501]
We develop an unfolded primal-dual network to unwrap (dealias) color Doppler echocardiographic images.
We compare its effectiveness against two state-of-the-art segmentation approaches based on nnU-Net and transformer models.
Our results suggest that deep learning-based methods can effectively remove aliasing artifacts in color Doppler echocardiographic images.
arXiv Detail & Related papers (2023-06-23T13:23:03Z)
- Promoting Generalization in Cross-Dataset Remote Photoplethysmography [1.422288795020666]
Remote Photoplethysmography, or the remote monitoring of a subject's heart rate using a camera, has seen a shift from handcrafted techniques to deep learning models.
We show that these models tend to learn a bias to pulse wave features inherent to the training dataset.
We develop augmentations to mitigate this learned bias by expanding both the range and variability of heart rates that the model sees while training, resulting in improved model convergence.
arXiv Detail & Related papers (2023-05-24T14:35:54Z)
- Image Enhancement for Remote Photoplethysmography in a Low-Light Environment [13.740047263242575]
The accuracy of remote heart rate monitoring technology has been significantly improved.
Despite significant algorithmic advances, the performance of rPPG algorithms can degrade over the long term.
Insufficient lighting during video capture degrades the quality of the physiological signal.
The proposed solution for the rPPG process is effective at detecting the pulsatile signal and improving its signal-to-noise ratio and precision.
arXiv Detail & Related papers (2023-03-16T14:18:48Z)
- Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z)
- Self-Supervised Light Field Depth Estimation Using Epipolar Plane Images [13.137957601685041]
We propose a self-supervised learning framework for light field depth estimation.
Compared with other state-of-the-art methods, the proposed method can also obtain higher quality results in real-world scenarios.
arXiv Detail & Related papers (2022-03-29T01:18:59Z)
- Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields [50.435129905215284]
We present an unsupervised learning-based depth estimation method for 4-D light field processing and analysis.
Based on the basic knowledge of the unique geometry structure of light field data, we explore the angular coherence among subsets of the light field views to estimate depth maps.
Our method can significantly shrink the performance gap between the previous unsupervised method and supervised ones, and produce depth maps with comparable accuracy to traditional methods with obviously reduced computational cost.
arXiv Detail & Related papers (2021-06-06T06:19:50Z)
- Lighting the Darkness in the Deep Learning Era [118.35081853500411]
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
Recent advances in this area are dominated by deep learning-based solutions.
We provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues.
arXiv Detail & Related papers (2021-04-21T19:12:19Z)
- Depth image denoising using nuclear norm and learning graph model [107.51199787840066]
Group-based image restoration methods are more effective at exploiting the similarity among patches.
For each patch, we find and group the most similar patches within a searching window.
The proposed method is superior to other current state-of-the-art denoising methods by both subjective and objective criteria.
arXiv Detail & Related papers (2020-08-09T15:12:16Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely, instead of different views, on depth from focus cues.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)