Facial Kinship Verification from remote photoplethysmography
- URL: http://arxiv.org/abs/2309.08006v2
- Date: Fri, 15 Mar 2024 03:16:30 GMT
- Title: Facial Kinship Verification from remote photoplethysmography
- Authors: Xiaoting Wu, Xiaoyi Feng, Constantino Álvarez Casado, Lili Liu, Miguel Bordallo López
- Abstract summary: Facial Kinship Verification (FKV) aims at automatically determining whether two subjects have a kinship relation based on human faces.
Traditional FKV faces challenges as it is vulnerable to spoof attacks and raises privacy issues.
In this paper, we explore for the first time FKV with vital bio-signals, focusing on remote Photoplethysmography (rPPG).
- Score: 8.212664345436092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial Kinship Verification (FKV) aims at automatically determining whether two subjects have a kinship relation based on human faces. It has potential applications in finding missing children and social media analysis. Traditional FKV faces challenges as it is vulnerable to spoof attacks and raises privacy issues. In this paper, we explore for the first time FKV with vital bio-signals, focusing on remote Photoplethysmography (rPPG). rPPG signals are extracted from facial videos, resulting in a one-dimensional signal that measures the heartbeat-induced changes in the visible light reflected from the skin. Specifically, we employ a straightforward one-dimensional Convolutional Neural Network (1DCNN) with a 1DCNN-Attention module and a kinship contrastive loss to learn kin similarity from rPPG signals. The network takes multiple rPPG signals extracted from various facial Regions of Interest (ROIs) as inputs. Additionally, the 1DCNN-Attention module is designed to learn and capture discriminative kin features from the feature embeddings. Finally, we demonstrate the feasibility of rPPG-based kinship detection through experimental evaluation on the UvA-NEMO Smile Database across different kin relations.
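The abstract describes the pipeline only at a high level; as a rough illustration, the sketch below encodes multi-ROI rPPG signals with a small 1D CNN, applies a simple attention-weighted pooling, and trains embeddings with a margin-based contrastive loss on kin/non-kin pairs. Layer sizes, signal length, the attention form, and the exact loss formulation are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RPPGKinNet(nn.Module):
    """Minimal sketch: encode multi-ROI rPPG signals with a 1D CNN,
    re-weight pooled features with a simple attention block, and embed them."""
    def __init__(self, n_rois=4, emb_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(                 # 1D CNN over the time axis
            nn.Conv1d(n_rois, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.attn = nn.Sequential(                     # attention scores (assumed form)
            nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1)
        )
        self.fc = nn.Linear(64, emb_dim)

    def forward(self, x):                              # x: (batch, n_rois, signal_len)
        h = self.backbone(x)                           # (batch, 64, 16)
        h = h.transpose(1, 2)                          # (batch, 16, 64)
        w = torch.softmax(self.attn(h), dim=1)         # weights over pooled time steps
        z = (w * h).sum(dim=1)                         # attention-weighted pooling -> (batch, 64)
        return F.normalize(self.fc(z), dim=1)          # unit-norm embedding

def kinship_contrastive_loss(z1, z2, kin_label, margin=1.0):
    """Margin-based contrastive loss (assumed form): pull kin pairs together,
    push non-kin pairs at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    return (kin_label * d.pow(2)
            + (1 - kin_label) * F.relu(margin - d).pow(2)).mean()

# Toy usage: a batch of 8 pairs, 4 ROI signals of 300 samples each.
net = RPPGKinNet()
a, b = torch.randn(8, 4, 300), torch.randn(8, 4, 300)
labels = torch.randint(0, 2, (8,)).float()             # 1 = kin, 0 = non-kin
loss = kinship_contrastive_loss(net(a), net(b), labels)
loss.backward()
```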
Related papers
- Biometric Authentication Based on Enhanced Remote Photoplethysmography Signal Morphology [31.017229351857655]
Remote photoplethysmography (rPPG) is a non-contact method for measuring cardiac signals from facial videos.
Recent studies have shown that each individual possesses a unique rPPG signal morphology that can be utilized as a biometric identifier.
Our approach needs only de-identified facial videos with subject IDs to train rPPG authentication models.
arXiv Detail & Related papers (2024-07-04T19:00:34Z)
- Dual-path TokenLearner for Remote Photoplethysmography-based Physiological Measurement with Facial Videos [24.785755814666086]
This paper utilizes learnable tokens to integrate spatial and temporal informative contexts from the global perspective of the video.
A Temporal TokenLearner (TTL) is designed to infer the quasi-periodic pattern of heartbeats, which eliminates temporal disturbances such as head movements.
arXiv Detail & Related papers (2023-08-15T13:45:45Z)
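For context, the core learnable-token idea can be sketched as follows: a small layer predicts attention maps over a temporal feature sequence and pools the features into a handful of tokens. This is a generic sketch under assumed shapes and layer sizes, not the paper's Dual-path TokenLearner.

```python
import torch
import torch.nn as nn

class TokenLearner1D(nn.Module):
    """Sketch of learnable tokens over a temporal feature sequence:
    predict `n_tokens` attention maps over time, then pool features with them."""
    def __init__(self, feat_dim=64, n_tokens=4):
        super().__init__()
        self.to_maps = nn.Conv1d(feat_dim, n_tokens, kernel_size=1)  # one attention map per token

    def forward(self, x):                                 # x: (batch, time, feat_dim)
        maps = self.to_maps(x.transpose(1, 2))            # (batch, n_tokens, time)
        maps = torch.softmax(maps, dim=-1)                # normalize each map over time
        tokens = torch.einsum('bnt,btd->bnd', maps, x)    # (batch, n_tokens, feat_dim)
        return tokens

# Toy usage: 150 frame-level features of dimension 64 reduced to 4 tokens.
feats = torch.randn(2, 150, 64)
tokens = TokenLearner1D()(feats)
print(tokens.shape)  # torch.Size([2, 4, 64])
```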
- Privacy-Preserving Remote Heart Rate Estimation from Facial Videos [19.442685015494316]
Deep learning techniques are vulnerable to perturbation attacks, which can result in significant data breaches.
We propose a method that involves extracting certain areas of the face with less identity-related information, followed by pixel shuffling and blurring.
Our approach reduces the accuracy of facial recognition algorithms by over 60%, with minimal impact on rPPG extraction.
arXiv Detail & Related papers (2023-06-01T20:48:04Z)
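As an illustration of this kind of anonymization step (not the paper's exact pipeline), the sketch below keeps only an assumed skin region, shuffles its pixels, and blurs it; shuffling preserves the per-frame spatial mean that rPPG extraction relies on while destroying facial structure. The ROI coordinates and parameters are arbitrary.

```python
import numpy as np
import cv2

def anonymize_roi(frame, roi, rng=None):
    """Sketch of a privacy-preserving step: keep only a skin ROI, shuffle its
    pixels, and blur it. Shuffling preserves the per-frame mean skin color
    (what rPPG averages) while removing facial structure.
    `roi` = (x, y, w, h) is assumed to come from an external face/ROI detector."""
    rng = rng or np.random.default_rng(0)
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w].copy()

    # Shuffle pixels inside the ROI (identity removed, per-frame mean unchanged).
    flat = patch.reshape(-1, patch.shape[-1])
    rng.shuffle(flat)
    patch = flat.reshape(patch.shape)

    # Light blur further suppresses residual texture.
    patch = cv2.GaussianBlur(patch, (5, 5), 0)

    out = np.zeros_like(frame)          # everything outside the ROI is dropped
    out[y:y + h, x:x + w] = patch
    return out

# Toy usage on a synthetic 480x640 frame with an assumed cheek ROI.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
anon = anonymize_roi(frame, roi=(300, 250, 80, 60))
print(anon.shape)
```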
- Benchmarking Joint Face Spoofing and Forgery Detection with Visual and Physiological Cues [81.15465149555864]
We establish the first joint face spoofing and forgery detection benchmark using both visual appearance and physiological rPPG cues.
To enhance the rPPG periodicity discrimination, we design a two-branch physiological network using both the facial spatio-temporal rPPG signal map and its continuous wavelet transformed counterpart as inputs.
arXiv Detail & Related papers (2022-08-10T15:41:48Z)
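A minimal sketch of how the second branch's input could be produced with PyWavelets: the continuous wavelet transform of a single rPPG signal yields a time-frequency map. The frame rate, scales, and mother wavelet below are assumptions, not the benchmark's settings.

```python
import numpy as np
import pywt

fs = 30.0                                   # assumed video frame rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s of signal
rppg = np.sin(2 * np.pi * 1.2 * t)          # toy rPPG at 72 bpm
rppg += 0.2 * np.random.randn(t.size)       # plus noise

# Continuous wavelet transform: one time-frequency map per rPPG signal,
# which can be fed to the second branch of a two-branch network.
scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(rppg, scales, 'morl', sampling_period=1 / fs)
print(coeffs.shape)                         # (64, 300): scales x time
print(freqs.min(), freqs.max())             # frequency range covered by the scales
```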
- Identifying Rhythmic Patterns for Face Forgery Detection and Categorization [46.21354355137544]
We propose a framework for face forgery detection and categorization consisting of: 1) a Spatial-Temporal Filtering Network (STFNet) for PPG signals, and 2) a Spatial-Temporal Interaction Network (STINet) for constraint and interaction of PPG signals.
With insight into the generation of forgery methods, we further propose intra-source and inter-source blending to boost the performance of the framework.
arXiv Detail & Related papers (2022-07-04T04:57:06Z)
- An Adversarial Human Pose Estimation Network Injected with Graph Structure [75.08618278188209]
In this paper, we design a novel generative adversarial network (GAN) to improve the localization accuracy of visible joints when some joints are invisible.
The network consists of two simple but efficient modules, the Cascade Feature Network (CFN) and the Graph Structure Network (GSN).
arXiv Detail & Related papers (2021-03-29T12:07:08Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture that combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel to the input image.
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
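A minimal sketch of that input construction (illustrative names and shapes, not the paper's code): the predicted segmentation mask is stacked onto the RGB image as a fourth channel before the image is passed to the classification CNN.

```python
import numpy as np

def add_mask_channel(image, mask):
    """Stack a binary segmentation mask onto an RGB image as a 4th channel.
    image: (H, W, 3) uint8, mask: (H, W) in {0, 1}."""
    mask = (mask.astype(np.uint8) * 255)[..., None]    # scale to image range, add channel axis
    return np.concatenate([image, mask], axis=-1)      # (H, W, 4) input for the CNN

# Toy usage: a fake fundus image and a fake demarcation-line mask.
img = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
mask = np.zeros((224, 224), dtype=np.uint8)
mask[100:104, :] = 1                                   # pretend segmentation output
x = add_mask_channel(img, mask)
print(x.shape)                                         # (224, 224, 4)
```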
- Spectrum Translation for Cross-Spectral Ocular Matching [59.17685450892182]
Cross-spectral verification remains a big issue in biometrics, especially for the ocular area.
We investigate the use of Conditional Adversarial Networks for spectrum translation between near infra-red and visual light images for ocular biometrics.
arXiv Detail & Related papers (2020-02-14T19:30:31Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel method for joint deep learning of facial expression synthesis and recognition for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.