Synthetic Data for Multi-Parameter Camera-Based Physiological Sensing
- URL: http://arxiv.org/abs/2110.04902v1
- Date: Sun, 10 Oct 2021 20:51:54 GMT
- Title: Synthetic Data for Multi-Parameter Camera-Based Physiological Sensing
- Authors: Daniel McDuff, Xin Liu, Javier Hernandez, Erroll Wood, Tadas
Baltrusaitis
- Abstract summary: We leverage a high-fidelity synthetics pipeline for generating videos of faces with faithful blood flow and breathing patterns.
We provide empirical evidence that heart and breathing rate measurement accuracy increases with the number of synthetic avatars in the training set.
We discuss the opportunities that synthetics present in the domain of camera-based physiological sensing.
- Score: 19.81916022915307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic data is a powerful tool in training data-hungry deep learning
algorithms. However, to date, camera-based physiological sensing has not taken
full advantage of these techniques. In this work, we leverage a high-fidelity
synthetics pipeline for generating videos of faces with faithful blood flow and
breathing patterns. We present systematic experiments showing how
physiologically-grounded synthetic data can be used in training camera-based
multi-parameter cardiopulmonary sensing. We provide empirical evidence that
heart and breathing rate measurement accuracy increases with the number of
synthetic avatars in the training set. Furthermore, training with avatars with
darker skin types leads to better overall performance than training with
avatars with lighter skin types. Finally, we discuss the opportunities that
synthetics present in the domain of camera-based physiological sensing and
limitations that need to be overcome.
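As a minimal sketch of the evaluation behind the heart and breathing rate results above: camera-based methods commonly recover a rate from a predicted physiological waveform by taking the strongest spectral peak inside a physiological frequency band. The snippet below is a hypothetical illustration, not the authors' pipeline; it uses simulated waveforms in place of a model's predicted pulse and respiration signals, and the frame rate and band limits are assumed values.

```python
import numpy as np

def dominant_rate_bpm(signal, fs, low_hz, high_hz):
    """Estimate the dominant rate (beats/breaths per minute) of a 1-D signal
    by locating the strongest FFT peak inside a physiological frequency band."""
    signal = signal - np.mean(signal)              # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low_hz) & (freqs <= high_hz)  # keep only plausible frequencies
    return 60.0 * freqs[band][np.argmax(power[band])]

if __name__ == "__main__":
    fs = 30.0                                      # assumed camera frame rate (Hz)
    t = np.arange(0, 30, 1.0 / fs)                 # 30-second clip
    true_hr, true_br = 72.0, 15.0                  # illustrative ground-truth rates (per minute)
    # Stand-ins for a model's predicted pulse and respiration waveforms.
    pulse = np.sin(2 * np.pi * (true_hr / 60.0) * t) + 0.1 * np.random.randn(t.size)
    resp = np.sin(2 * np.pi * (true_br / 60.0) * t) + 0.1 * np.random.randn(t.size)

    hr = dominant_rate_bpm(pulse, fs, 0.7, 4.0)    # roughly 42-240 beats/min band
    br = dominant_rate_bpm(resp, fs, 0.08, 0.5)    # roughly 5-30 breaths/min band
    print(f"HR error: {abs(hr - true_hr):.2f} bpm, BR error: {abs(br - true_br):.2f} per min")
</parameter>
```

In experiments like those described in the abstract, per-clip estimates of this kind would be compared against ground truth to report error metrics as the number of synthetic training avatars grows.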
Related papers
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Training Robust Deep Physiological Measurement Models with Synthetic Video-based Data [11.31971398273479]
We propose measures to add real-world noise to synthetic physiological signals and corresponding facial videos.
Our results show that we were able to reduce the average MAE from 6.9 to 2.0.
arXiv Detail & Related papers (2023-11-09T13:55:45Z)
- UAV-Sim: NeRF-based Synthetic Data Generation for UAV-based Perception [62.71374902455154]
We leverage recent advancements in neural rendering to improve static and dynamic novel-view UAV-based image rendering.
We demonstrate a considerable performance boost when a state-of-the-art detection model is optimized primarily on hybrid sets of real and synthetic data.
arXiv Detail & Related papers (2023-10-25T00:20:37Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
- ChemVise: Maximizing Out-of-Distribution Chemical Detection with the Novel Application of Zero-Shot Learning [60.02503434201552]
This research proposes learning approximations of complex exposures from training sets of simple ones.
We demonstrate that this approach to synthetic sensor responses surprisingly improves the detection of out-of-distribution obscured chemical analytes.
arXiv Detail & Related papers (2023-02-09T20:19:57Z)
- SYNTA: A novel approach for deep learning-based image analysis in muscle histopathology using photo-realistic synthetic data [2.1616289178832666]
We introduce SYNTA (synthetic data) as a novel approach for the generation of synthetic, photo-realistic, and highly complex biomedical images as training data.
We demonstrate that it is possible to perform robust and expert-level segmentation tasks on previously unseen real-world data, without the need for manual annotations.
arXiv Detail & Related papers (2022-07-29T12:50:32Z)
- SyntheX: Scaling Up Learning-based X-ray Image Analysis Through In Silico Experiments [12.019996672009375]
We show that creating realistic simulated images from human models is a viable alternative to large-scale in situ data collection.
Because synthetic generation of training data from human-based models scales easily, we find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real data-trained models.
arXiv Detail & Related papers (2022-06-13T13:08:41Z)
- SCAMPS: Synthetics for Camera Measurement of Physiological Signals [17.023803380199492]
We present SCAMPS, a dataset of synthetics containing 2,800 videos (1.68M frames) with aligned cardiac and respiratory signals and facial action intensities.
We provide descriptive statistics about the underlying waveforms, including inter-beat interval, heart rate variability, and pulse arrival time; a brief sketch of how such statistics can be derived follows this list.
arXiv Detail & Related papers (2022-06-08T23:48:41Z)
- Synthetic Data for Model Selection [2.4499092754102874]
We show that synthetic data can be beneficial for model selection.
We introduce a novel method to calibrate the synthetic error estimation to fit that of the real domain.
arXiv Detail & Related papers (2021-05-03T09:52:03Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
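As referenced in the SCAMPS entry above, inter-beat interval and heart rate variability are standard descriptive statistics for cardiac waveforms. The sketch below is a hypothetical illustration of deriving them from a simulated pulse signal, assuming SciPy is available for peak detection; it is not the SCAMPS tooling, and the sampling rate and peak-spacing threshold are assumed values.

```python
import numpy as np
from scipy.signal import find_peaks

def ibi_and_hrv(pulse, fs):
    """Derive inter-beat intervals (IBI, seconds) and two common HRV summary
    statistics (SDNN and RMSSD, in milliseconds) from a pulse waveform."""
    # Require peaks at least 0.5 s apart (i.e. <= 120 bpm) for this toy signal.
    peaks, _ = find_peaks(pulse, distance=int(0.5 * fs))
    ibi = np.diff(peaks) / fs                              # seconds between successive beats
    sdnn = 1000.0 * np.std(ibi)                            # overall IBI variability (ms)
    rmssd = 1000.0 * np.sqrt(np.mean(np.diff(ibi) ** 2))   # beat-to-beat variability (ms)
    return ibi, sdnn, rmssd

if __name__ == "__main__":
    fs = 30.0                                              # assumed sampling rate (Hz)
    t = np.arange(0, 60, 1.0 / fs)
    # Toy pulse with a slowly varying instantaneous heart rate around 1.2 Hz (72 bpm).
    inst_freq = 1.2 + 0.05 * np.sin(2 * np.pi * 0.1 * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    pulse = np.sin(phase)

    ibi, sdnn, rmssd = ibi_and_hrv(pulse, fs)
    print(f"{len(ibi)} IBIs, mean {ibi.mean():.3f} s, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```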
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.