SCAMPS: Synthetics for Camera Measurement of Physiological Signals
- URL: http://arxiv.org/abs/2206.04197v1
- Date: Wed, 8 Jun 2022 23:48:41 GMT
- Title: SCAMPS: Synthetics for Camera Measurement of Physiological Signals
- Authors: Daniel McDuff, Miah Wander, Xin Liu, Brian L. Hill, Javier Hernandez,
Jonathan Lester, Tadas Baltrusaitis
- Abstract summary: We present SCAMPS, a dataset of synthetics containing 2,800 videos (1.68M frames) with aligned cardiac and respiratory signals and facial action intensities.
We provide descriptive statistics about the underlying waveforms, including inter-beat interval, heart rate variability, and pulse arrival time.
- Score: 17.023803380199492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of cameras and computational algorithms for noninvasive, low-cost and
scalable measurement of physiological (e.g., cardiac and pulmonary) vital signs
is very attractive. However, obtaining diverse data representing a range of environments, body motions, illumination conditions, and physiological states is laborious, time consuming, and expensive. Synthetic data have proven a valuable
tool in several areas of machine learning, yet are not widely available for
camera measurement of physiological states. Synthetic data offer "perfect"
labels (e.g., without noise and with precise synchronization), labels that may
not be possible to obtain otherwise (e.g., precise pixel level segmentation
maps) and provide a high degree of control over variation and diversity in the
dataset. We present SCAMPS, a dataset of synthetics containing 2,800 videos
(1.68M frames) with aligned cardiac and respiratory signals and facial action
intensities. The RGB frames are provided alongside segmentation maps. We
provide precise descriptive statistics about the underlying waveforms,
including inter-beat interval, heart rate variability, and pulse arrival time.
Finally, we present baseline results training on these synthetic data and
testing on real-world datasets to illustrate generalizability.
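The waveform statistics named in the abstract are easy to make concrete. Below is a minimal sketch, not the authors' code, of how inter-beat intervals and simple HRV summaries (SDNN, RMSSD) could be computed from a pulse trace; the toy waveform, the 30 Hz sampling rate, and the peak-detection settings are assumptions for illustration. Pulse arrival time is omitted because it additionally requires ECG R-peak times as a reference.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 30.0  # assumed sampling rate (frames per second)

# Toy synthetic blood volume pulse: ~72 bpm with slow heart-rate modulation.
t = np.arange(0, 60, 1 / FS)
inst_hr_bpm = 72 + 3 * np.sin(2 * np.pi * 0.1 * t)    # slowly varying heart rate
phase = 2 * np.pi * np.cumsum(inst_hr_bpm / 60) / FS  # integrate rate -> phase (radians)
bvp = np.sin(phase)

# Inter-beat intervals (IBI): time between successive systolic peaks.
peaks, _ = find_peaks(bvp, distance=int(FS * 0.4))    # enforce a ~0.4 s refractory gap
ibi = np.diff(peaks) / FS                             # seconds

# Common HRV summaries over the IBI series.
mean_hr = 60.0 / ibi.mean()                           # beats per minute
sdnn = 1000 * np.std(ibi)                             # ms
rmssd = 1000 * np.sqrt(np.mean(np.diff(ibi) ** 2))    # ms
print(f"mean HR {mean_hr:.1f} bpm, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```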
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
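For readers unfamiliar with the term, "establishing scaling laws" usually means fitting a saturating power law, error(N) ≈ a·N^(-b) + c, to held-out error as data or model size N grows. The sketch below shows such a fit on made-up points; the numbers are illustrative only and are not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # Saturating power law commonly used for scaling-law fits.
    return a * np.power(n, -b) + c

# Hypothetical (dataset size in hours, held-out error) pairs; illustrative only.
n = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
err = np.array([0.80, 0.52, 0.36, 0.27, 0.23])

params, _ = curve_fit(power_law, n, err, p0=(1.0, 0.2, 0.1), maxfev=10000)
a, b, c = params
print(f"fit: error(N) = {a:.2f} * N^(-{b:.2f}) + {c:.2f}")
print("extrapolated error at N = 4e7 hours:", power_law(4e7, *params))
```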
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
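Dense optical flow of the kind such self-supervised pipelines rely on is available in standard tooling. The sketch below runs OpenCV's Farnebäck method on two placeholder frames; it illustrates the flow computation only and is not the CathFlow architecture.

```python
import numpy as np
import cv2

# Two consecutive grayscale frames; random placeholders standing in for real images.
prev_frame = (np.random.rand(256, 256) * 255).astype(np.uint8)
next_frame = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Dense Farneback optical flow: one (dx, dy) displacement vector per pixel.
# Positional args after the frames: flow, pyr_scale, levels, winsize,
# iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("flow shape:", flow.shape, "mean displacement (px):", float(magnitude.mean()))
```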
- Training Robust Deep Physiological Measurement Models with Synthetic Video-based Data [11.31971398273479]
We propose measures to add real-world noise to synthetic physiological signals and corresponding facial videos.
Our results show that we were able to reduce the average MAE from 6.9 to 2.0.
arXiv Detail & Related papers (2023-11-09T13:55:45Z)
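The summary does not spell out which noise models are used, so the sketch below shows only the general idea: a clean synthetic pulse trace corrupted with additive sensor noise, baseline wander, and a short motion artifact, with all magnitudes chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 30.0
t = np.arange(0, 30, 1 / FS)
clean_bvp = np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm synthetic pulse

def add_real_world_noise(signal, fs, rng):
    """Corrupt a clean synthetic trace with a few plausible noise sources."""
    noisy = signal + 0.1 * rng.standard_normal(signal.size)                    # sensor noise
    noisy = noisy + 0.3 * np.sin(2 * np.pi * 0.05 * np.arange(signal.size) / fs)  # baseline wander
    start = rng.integers(0, signal.size - int(2 * fs))                         # 2 s motion artifact
    noisy[start:start + int(2 * fs)] += rng.uniform(-1, 1, int(2 * fs))
    return noisy

noisy_bvp = add_real_world_noise(clean_bvp, FS, rng)
print("SNR after corruption (dB):",
      10 * np.log10(np.var(clean_bvp) / np.var(noisy_bvp - clean_bvp)))
```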
- Graph-Aware Contrasting for Multivariate Time-Series Classification [50.84488941336865]
Existing contrastive learning methods mainly focus on achieving temporal consistency with temporal augmentation and contrasting techniques.
We propose Graph-Aware Contrasting for spatial consistency across MTS data.
Our proposed method achieves state-of-the-art performance on various MTS classification tasks.
arXiv Detail & Related papers (2023-09-11T02:35:22Z)
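To make "temporal augmentation and contrasting" concrete, the sketch below computes a generic InfoNCE-style contrastive loss between embeddings of two augmented views of the same batch; this is the standard formulation, not the paper's graph-aware variant, and the embeddings are random stand-ins.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: row i of z1 should match row i of z2 (the positives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                             # pairwise similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                           # positives on the diagonal

rng = np.random.default_rng(0)
# Stand-ins for encoder outputs of two augmented views of the same batch
# (in practice the augmentations are applied to the raw series before encoding).
shared = rng.standard_normal((8, 64))
view1 = shared + 0.05 * rng.standard_normal(shared.shape)
view2 = shared + 0.05 * rng.standard_normal(shared.shape)
print("InfoNCE loss:", info_nce(view1, view2))
```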
- Remote Bio-Sensing: Open Source Benchmark Framework for Fair Evaluation of rPPG [2.82697733014759]
rPPG (remote photoplethysmography) is a technology that measures and analyzes BVP (Blood Volume Pulse) using the light absorption characteristics of hemoglobin captured through a camera.
This study provides a framework to benchmark various rPPG techniques across a wide range of datasets for fair evaluation and comparison.
arXiv Detail & Related papers (2023-07-24T09:35:47Z)
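The measurement that such benchmarks evaluate can be sketched in a few lines: average the skin pixels of each frame, band-pass filter the trace, and read the heart rate off the dominant spectral peak. The snippet below is the simplest green-channel variant, assuming a 30 fps clip of an already-segmented skin region; real benchmark frameworks implement stronger methods (e.g., CHROM, POS, deep models).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30.0  # assumed video frame rate

def estimate_hr_green(frames, fs=FS):
    """frames: (T, H, W, 3) uint8 RGB clip of a roughly skin-only region."""
    green = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)  # mean green per frame
    green = green - green.mean()
    # Band-pass 0.7-4 Hz (42-240 bpm), the plausible heart-rate band.
    b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    bvp = filtfilt(b, a, green)
    spectrum = np.abs(np.fft.rfft(bvp)) ** 2
    freqs = np.fft.rfftfreq(bvp.size, d=1 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]  # bpm

# Toy clip: constant skin tone with a weak 1.2 Hz (72 bpm) pulsatile component.
t = np.arange(0, 20, 1 / FS)
pulse = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
frames = np.ones((t.size, 36, 36, 3)) * np.array([140, 100, 90]) * pulse[:, None, None, None]
print("estimated HR:", estimate_hr_green(frames.astype(np.uint8)), "bpm")
```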
- SIAN: Style-Guided Instance-Adaptive Normalization for Multi-Organ Histopathology Image Synthesis [63.845552349914186]
We propose a style-guided instance-adaptive normalization (SIAN) to synthesize realistic color distributions and textures for different organs.
The four phases work together and are integrated into a generative network to embed image semantics, style, and instance-level boundaries.
arXiv Detail & Related papers (2022-09-02T16:45:46Z)
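For context, the adaptive instance normalization this family of methods builds on re-normalizes content features to the style's channel-wise statistics. The sketch below is plain AdaIN on random feature maps; the paper's style-guided, instance-adaptive variant adds machinery not shown here.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization on (C, H, W) feature maps:
    normalize content per channel, then rescale to the style's channel statistics."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
content_feat = rng.standard_normal((64, 32, 32))              # content features
style_feat = 2.0 * rng.standard_normal((64, 32, 32)) + 1.0    # style with different stats
out = adain(content_feat, style_feat)
print("output channel means match style means:",
      np.allclose(out.mean(axis=(1, 2)), style_feat.mean(axis=(1, 2)), atol=1e-3))
```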
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-up and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Synthetic Data for Multi-Parameter Camera-Based Physiological Sensing [19.81916022915307]
We leverage a high-fidelity synthetics pipeline for generating videos of faces with faithful blood flow and breathing patterns.
We provide empirical evidence that heart and breathing rate measurement accuracy increases with the number of synthetic avatars in the training set.
We discuss the opportunities that synthetics present in the domain of camera-based physiological sensing.
arXiv Detail & Related papers (2021-10-10T20:51:54Z)
- Synthesizing Skeletal Motion and Physiological Signals as a Function of a Virtual Human's Actions and Emotions [10.59409233835301]
We develop for the first time a system consisting of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals.
The proposed framework is modular and allows the flexibility to experiment with different models.
In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data.
arXiv Detail & Related papers (2021-02-08T21:56:15Z)
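The summary does not describe the underlying models, so the sketch below is only a toy illustration of what "synchronously" synthesized signals mean in practice: a pulse and a respiration trace generated on one shared timeline, so every sample is aligned by construction; the rates and the coupling term are arbitrary.

```python
import numpy as np

FS = 100.0                       # shared sampling rate (assumption)
t = np.arange(0, 30, 1 / FS)     # one common timeline for all modalities

heart_rate_hz = 1.2              # ~72 bpm
resp_rate_hz = 0.25              # ~15 breaths per minute

# Respiration slightly modulates heart rate (respiratory sinus arrhythmia).
respiration = np.sin(2 * np.pi * resp_rate_hz * t)
inst_hr = heart_rate_hz + 0.05 * respiration
pulse = np.sin(2 * np.pi * np.cumsum(inst_hr) / FS)

# Because both traces share t, cross-signal timing (e.g., a pulse relative to a
# breath) is known exactly, which is the "perfect label" property of synthetics.
synchronized = np.stack([t, pulse, respiration], axis=1)  # (samples, [time, pulse, resp])
print("synchronized array shape:", synchronized.shape)
```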
- Video-based Remote Physiological Measurement via Cross-verified Feature Disentangling [121.50704279659253]
We propose a cross-verified feature disentangling strategy to disentangle the physiological features with non-physiological representations.
We then use the distilled physiological features for robust multi-task physiological measurements.
The disentangled features are finally used for the joint prediction of multiple physiological signals, such as average HR values and rPPG signals.
arXiv Detail & Related papers (2020-07-16T09:39:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.