Analyzing the Impact of Shape & Context on the Face Recognition
Performance of Deep Networks
- URL: http://arxiv.org/abs/2208.02991v1
- Date: Fri, 5 Aug 2022 05:32:07 GMT
- Title: Analyzing the Impact of Shape & Context on the Face Recognition
Performance of Deep Networks
- Authors: Sandipan Banerjee, Walter Scheirer, Kevin Bowyer, Patrick Flynn
- Abstract summary: We analyze how changing the underlying 3D shape of the base identity in face images can distort their overall appearance.
Our experiments demonstrate the significance of facial shape in accurate face matching and underpin the importance of contextual data for network training.
- Score: 2.0099255688059907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this article, we analyze how changing the underlying 3D shape of the base
identity in face images can distort their overall appearance, especially from
the perspective of deep face recognition. As done in popular training data
augmentation schemes, we graphically render real and synthetic face images with
randomly chosen or best-fitting 3D face models to generate novel views of the
base identity. We compare deep features generated from these images to assess
the perturbation these renderings introduce into the original identity. We
perform this analysis at various degrees of facial yaw with the base identities
varying in gender and ethnicity. Additionally, we investigate if adding some
form of context and background pixels in these rendered images, when used as
training data, further improves the downstream performance of a face
recognition model. Our experiments demonstrate the significance of facial shape
in accurate face matching and underpin the importance of contextual data for
network training.
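
The feature-level comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes embeddings have already been extracted for each base image and its 3D-rendered view with some deep face recognition network, and the cosine-similarity helper and yaw bins below are hypothetical choices for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two deep feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_similarity_by_yaw(base_embeddings, rendered_embeddings, yaws,
                           bins=(0, 15, 30, 45, 60, 90)):
    """Average base-vs-rendered similarity per absolute yaw bin (degrees).

    base_embeddings / rendered_embeddings: 1-D feature vectors for the same
    base identities before and after 3D rendering (hypothetical inputs).
    yaws: rendered yaw angle, in degrees, for each pair.
    Lower mean similarity in a bin suggests the rendering perturbs the
    original identity more strongly at that pose.
    """
    scores = {(lo, hi): [] for lo, hi in zip(bins[:-1], bins[1:])}
    for base, rendered, yaw in zip(base_embeddings, rendered_embeddings, yaws):
        sim = cosine_similarity(np.asarray(base), np.asarray(rendered))
        for lo, hi in zip(bins[:-1], bins[1:]):
            if lo <= abs(yaw) < hi:
                scores[(lo, hi)].append(sim)
                break
    return {b: float(np.mean(s)) if s else None for b, s in scores.items()}
```

Aggregating the scores per yaw bin is one simple way to expose how identity perturbation grows with pose, which is the kind of signal the abstract's analysis revolves around.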
Related papers
- Face Reconstruction from Face Embeddings using Adapter to a Face Foundation Model [24.72209930285057]
Face recognition systems extract embedding vectors from face images and use these embeddings to verify or identify individuals.
A face reconstruction attack (also known as template inversion) reconstructs face images from face embeddings and uses the reconstructed image to enter a face recognition system.
We propose to use a face foundation model to reconstruct face images from the embeddings of a blackbox face recognition model.
arXiv Detail & Related papers (2024-11-06T14:45:41Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- Evaluation of Human and Machine Face Detection using a Novel Distinctive Human Appearance Dataset [0.76146285961466]
We evaluate current state-of-the-art face-detection models in their ability to detect faces in images.
The evaluation results show that face-detection algorithms do not generalize well to diverse appearances.
arXiv Detail & Related papers (2021-11-01T02:20:40Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematic empirical analysis of synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Robust Face-Swap Detection Based on 3D Facial Shape Information [59.32489266682952]
Face-swap images and videos are increasingly used by malicious attackers to discredit key figures.
Previous pixel-level, artifact-based detection techniques focus on subtle low-level patterns but ignore available semantic clues.
We propose a biometric-information-based method that fully exploits appearance and shape features for face-swap detection of key figures.
arXiv Detail & Related papers (2021-04-28T09:35:48Z)
- VAE/WGAN-Based Image Representation Learning For Pose-Preserving Seamless Identity Replacement In Facial Images [15.855376604558977]
We present a novel variational generative adversarial network (VGAN) based on Wasserstein loss.
We show that our network can be used to perform pose-preserving identity morphing and identity-preserving pose morphing.
arXiv Detail & Related papers (2020-03-02T03:35:59Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)