CodePhys: Robust Video-based Remote Physiological Measurement through Latent Codebook Querying
- URL: http://arxiv.org/abs/2502.07526v1
- Date: Tue, 11 Feb 2025 13:05:42 GMT
- Title: CodePhys: Robust Video-based Remote Physiological Measurement through Latent Codebook Querying
- Authors: Shuyang Chu, Menghan Xia, Mengyao Yuan, Xin Liu, Tapio Seppanen, Guoying Zhao, Jingang Shi
- Abstract summary: Remote photoplethysmography aims to measure non-contact physiological signals from facial videos.
Most existing methods directly extract video-based rPPG features by designing neural networks for heart rate estimation.
Recent methods are easily affected by interference and degradation, resulting in noisy rPPG signals.
We propose a novel method named CodePhys, which innovatively treats rPPG measurement as a code query task in a noise-free proxy space.
- Score: 26.97093819822487
- Abstract: Remote photoplethysmography (rPPG) aims to measure non-contact physiological signals from facial videos, which has shown great potential in many applications. Most existing methods directly extract video-based rPPG features by designing neural networks for heart rate estimation. Although they can achieve acceptable results, the recovery of rPPG signal faces intractable challenges when interference from real-world scenarios takes place on facial video. Specifically, facial videos are inevitably affected by non-physiological factors (e.g., camera device noise, defocus, and motion blur), leading to the distortion of extracted rPPG signals. Recent rPPG extraction methods are easily affected by interference and degradation, resulting in noisy rPPG signals. In this paper, we propose a novel method named CodePhys, which innovatively treats rPPG measurement as a code query task in a noise-free proxy space (i.e., codebook) constructed by ground-truth PPG signals. We consider noisy rPPG features as queries and generate high-fidelity rPPG features by matching them with noise-free PPG features from the codebook. Our approach also incorporates a spatial-aware encoder network with a spatial attention mechanism to highlight physiologically active areas and uses a distillation loss to reduce the influence of non-periodic visual interference. Experimental results on four benchmark datasets demonstrate that CodePhys outperforms state-of-the-art methods in both intra-dataset and cross-dataset settings.
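The codebook-query idea in the abstract can be pictured as a nearest-neighbour lookup: noisy rPPG features act as queries and are snapped to the closest noise-free PPG feature in the codebook. The sketch below is an illustrative assumption, not the authors' implementation (CodePhys learns the codebook end-to-end from ground-truth PPG signals; here the codes, shapes, and function names are hypothetical):

```python
import numpy as np

def query_codebook(noisy_features, codebook):
    """Replace each noisy rPPG feature (row of `noisy_features`, shape (T, D))
    with its nearest noise-free PPG code from `codebook` (shape (K, D))."""
    # Squared Euclidean distance between every query and every code entry.
    dists = ((noisy_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)          # index of the closest code per query
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 8))                   # 32 hypothetical PPG codes
clean = codebook[[3, 7, 7, 20]]                       # "ground-truth" features
noisy = clean + 0.01 * rng.normal(size=clean.shape)   # mildly degraded queries
matched, idx = query_codebook(noisy, codebook)        # recovers the clean codes
```

Because the output is always a code drawn from the clean-PPG space, video degradation can at worst change *which* code is selected; it cannot distort the waveform of the matched feature itself, which is the robustness argument the abstract makes.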
Related papers
- Partitioning Message Passing for Graph Fraud Detection [57.928658584067556]
Label imbalance and homophily-heterophily mixture are the fundamental problems encountered when applying Graph Neural Networks (GNNs) to Graph Fraud Detection (GFD) tasks.
Existing GNN-based GFD models are designed to augment graph structure to accommodate the inductive bias of GNNs towards homophily.
In our work, we argue that the key to applying GNNs for GFD is not to exclude but to distinguish neighbors with different labels.
arXiv Detail & Related papers (2024-11-16T11:30:53Z) - Mask Attack Detection Using Vascular-weighted Motion-robust rPPG Signals [21.884783786547782]
rPPG-based face anti-spoofing methods often suffer from performance degradation due to unstable face alignment in the video sequence.
A landmark-anchored face stitching method is proposed to align the faces robustly and precisely at the pixel-wise level by using both SIFT keypoints and facial landmarks.
A lightweight EfficientNet with a Gated Recurrent Unit (GRU) is designed to extract both spatial and temporal features for classification.
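The spatial-then-temporal pipeline described here (a CNN backbone per frame, a GRU over time) can be sketched minimally. Everything below is a toy stand-in: a fixed linear map replaces EfficientNet, and a single hand-written GRU cell replaces the trained recurrent layer; all names, dimensions, and weight scales are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update: gates decide how much of the previous hidden state
    to keep versus overwrite with the candidate state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_cand

rng = np.random.default_rng(0)
D, H, T = 16, 8, 30                           # feature dim, hidden dim, frames
params = [0.1 * rng.normal(size=s) for s in
          [(D, H), (H, H), (D, H), (H, H), (D, H), (H, H)]]
backbone = 0.1 * rng.normal(size=(64, D))     # stand-in for the spatial CNN

frames = rng.normal(size=(T, 64))             # 30 flattened "frames"
h = np.zeros(H)
for f in frames:                              # temporal aggregation over frames
    h = gru_step(f @ backbone, h, params)     # spatial features -> GRU state
```

The final hidden state `h` summarizes the whole clip and would feed a classification head in the full model.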
arXiv Detail & Related papers (2023-05-25T11:22:17Z) - PhysFormer++: Facial Video-based Physiological Measurement with SlowFast Temporal Difference Transformer [76.40106756572644]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose two end-to-end video transformer architectures, PhysFormer and PhysFormer++, to adaptively aggregate both local and global features for rPPG representation enhancement.
Comprehensive experiments are performed on four benchmark datasets to show our superior performance in both intra-dataset and cross-dataset testing.
arXiv Detail & Related papers (2023-02-07T15:56:03Z) - Facial Video-based Remote Physiological Measurement via Self-supervised Learning [9.99375728024877]
We introduce a novel framework that learns to estimate rPPG signals from facial videos without the need for ground-truth signals.
Negative samples are generated via a learnable frequency module, which performs nonlinear signal frequency transformation.
Next, we introduce a local rPPG expert aggregation module to estimate rPPG signals from augmented samples.
It encodes complementary pulsation information from different face regions and aggregates them into one rPPG prediction.
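The negative-sample idea above rests on a simple fact: resampling a periodic signal shifts its dominant frequency, so the transformed clip can no longer share the original heart rate. The sketch below uses fixed linear resampling as a stand-in for the paper's learnable nonlinear frequency module; the frame rate, signal, and factor are assumed values:

```python
import numpy as np

fs = 30.0                                  # assumed video frame rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s window (300 frames)
ppg = np.sin(2 * np.pi * 1.2 * t)          # synthetic pulse at 1.2 Hz (72 bpm)

def dominant_freq(x, fs):
    """Frequency of the largest spectral peak (Hann-windowed FFT)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.fft.rfftfreq(len(x), 1 / fs)[spec.argmax()]

def frequency_negative(signal, factor):
    """Read the signal `factor`x faster via linear resampling, shifting its
    dominant frequency; a fixed stand-in for the learnable frequency module."""
    positions = np.arange(0, len(signal) - 1, factor)
    return np.interp(positions, np.arange(len(signal)), signal)

neg = frequency_negative(ppg, 1.5)         # 1.2 Hz pulse -> 1.8 Hz negative
```

A contrastive loss can then pull representations of the original clip together while pushing away these frequency-shifted negatives, since they provably carry a different heart rate.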
arXiv Detail & Related papers (2022-10-27T13:03:23Z) - Benchmarking Joint Face Spoofing and Forgery Detection with Visual and Physiological Cues [81.15465149555864]
We establish the first joint face spoofing and forgery detection benchmark using both visual appearance and physiological rPPG cues.
To enhance rPPG periodicity discrimination, we design a two-branch physiological network using both the facial spatio-temporal rPPG signal map and its continuous-wavelet-transformed counterpart as inputs.
arXiv Detail & Related papers (2022-08-10T15:41:48Z) - WPPG Net: A Non-contact Video Based Heart Rate Extraction Network Framework with Compatible Training Capability [21.33542693986985]
Our facial skin presents a subtle color change known as the remote photoplethysmography (rPPG) signal, from which we can extract the subject's heart rate.
Recently, many deep learning methods and related datasets for rPPG signal extraction have been proposed.
However, because of the time blood takes to flow through the body and other factors, label waves such as BVP signals have uncertain delays relative to the true rPPG signals in some datasets.
In this paper, by analyzing the common rhythm and periodicity characteristics of rPPG signals and label waves, we propose a training methodology that wraps these networks so that they remain effective when trained on such delay-affected datasets.
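The label delay this entry describes can be exposed with a standard tool: the lag at which the cross-correlation between the label wave and the extracted signal peaks. This is a generic sketch of that measurement, not the WPPG method itself; the frame rate and signals are assumed toy values:

```python
import numpy as np

def estimate_delay(label, signal, fs):
    """Estimate how many seconds `label` trails `signal`, from the peak of the
    full cross-correlation (both sequences zero-meaned first)."""
    a = label - label.mean()
    b = signal - signal.mean()
    corr = np.correlate(a, b, mode="full")
    lag = int(corr.argmax()) - (len(b) - 1)   # samples by which label trails
    return lag / fs

fs = 30.0
rng = np.random.default_rng(1)
signal = rng.normal(size=300)                 # stand-in for the true rPPG wave
label = np.roll(signal, 5)                    # BVP label delayed by 5 frames
delay = estimate_delay(label, signal, fs)     # recovers ~5/30 s
```

On real data the delay is less clean than in this toy example, which is exactly why a training scheme tolerant to uncertain label delays is needed.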
arXiv Detail & Related papers (2022-07-04T19:52:30Z) - Identifying Rhythmic Patterns for Face Forgery Detection and Categorization [46.21354355137544]
We propose a framework for face forgery detection and categorization consisting of: 1) a Spatial-Temporal Filtering Network (STFNet) for PPG signals, and 2) a Spatial-Temporal Interaction Network (STINet) for constraint and interaction of PPG signals.
With insight into the generation of forgery methods, we further propose intra-source and inter-source blending to boost the performance of the framework.
arXiv Detail & Related papers (2022-07-04T04:57:06Z) - DRNet: Decomposition and Reconstruction Network for Remote Physiological Measurement [39.73408626273354]
Existing methods are generally divided into two groups.
The first focuses on mining the subtle blood volume pulse (BVP) signals from face videos, but seldom explicitly models the noises that dominate face video content.
The second focuses on modeling noisy data directly, resulting in suboptimal performance due to the lack of regularity of these severe random noises.
arXiv Detail & Related papers (2022-06-12T07:40:10Z) - PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose PhysFormer, an end-to-end video-transformer-based architecture.
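PhysFormer builds its attention on temporal-difference convolution, which emphasizes frame-to-frame changes (the rPPG clue) over static appearance. The following is a deliberately simplified numpy caricature of only the difference step; the blending weight `theta` and the shapes are assumptions, not the paper's parameters:

```python
import numpy as np

def temporal_difference_mix(features, theta=0.7):
    """Blend per-frame features with their frame-to-frame differences so that
    subtle periodic changes stand out over constant appearance.
    `theta` weights the difference term (an assumed default)."""
    diff = np.diff(features, axis=0, prepend=features[:1])  # x[t] - x[t-1]
    return (1 - theta) * features + theta * diff

T, D = 30, 8
static = np.ones((T, D))                      # constant appearance component
pulse = np.sin(np.linspace(0, 4 * np.pi, T))[:, None] * np.ones((1, D))
mixed = temporal_difference_mix(static + 0.1 * pulse)
```

For a purely static input the difference term vanishes and the output is just a scaled copy, so any surviving temporal structure in `mixed` comes from the pulse component, which is the effect the difference operator is meant to amplify.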
arXiv Detail & Related papers (2021-11-23T18:57:11Z) - Video-based Remote Physiological Measurement via Cross-verified Feature Disentangling [121.50704279659253]
We propose a cross-verified feature disentangling strategy to disentangle the physiological features with non-physiological representations.
We then use the distilled physiological features for robust multi-task physiological measurements.
The disentangled features are finally used for the joint prediction of multiple physiological signals, such as average HR values and rPPG signals.
arXiv Detail & Related papers (2020-07-16T09:39:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.