Differential 3D Facial Recognition: Adding 3D to Your State-of-the-Art 2D Method
- URL: http://arxiv.org/abs/2004.03385v1
- Date: Fri, 3 Apr 2020 20:17:14 GMT
- Title: Differential 3D Facial Recognition: Adding 3D to Your State-of-the-Art 2D Method
- Authors: J. Matias Di Martino, Fernando Suzacq, Mauricio Delbracio, Qiang Qiu, and Guillermo Sapiro
- Abstract summary: We show that it is possible to adopt active illumination to enhance state-of-the-art 2D face recognition approaches with 3D features.
The proposed ideas can significantly boost face recognition performance and dramatically improve the robustness to spoofing attacks.
- Score: 90.26041504667451
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Active illumination is a prominent complement to enhance 2D face recognition
and make it more robust, e.g., to spoofing attacks and low-light conditions. In
the present work we show that it is possible to adopt active illumination to
enhance state-of-the-art 2D face recognition approaches with 3D features, while
bypassing the complicated task of 3D reconstruction. The key idea is to project
over the test face a high spatial frequency pattern, which allows us to
simultaneously recover real 3D information plus a standard 2D facial image.
Therefore, state-of-the-art 2D face recognition solutions can be transparently
applied, while complementary 3D facial features are extracted from the
high-frequency component of the input image. Experimental results on the
ND-2006 dataset show that the proposed ideas can significantly boost face
recognition performance and dramatically improve robustness to spoofing
attacks.
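The frequency-separation idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the row-wise moving-average filter, window size, image size, and synthetic fringe pattern are all assumptions chosen for clarity.

```python
import numpy as np

def split_frequency_components(image, ksize=9):
    """Split an image into a low-frequency component (approximating the
    pattern-free 2D face image) and a high-frequency residual (carrying
    the projected pattern, and hence the 3D cue)."""
    # Row-wise moving-average low-pass filter: an illustrative stand-in
    # for whatever filtering/demodulation the paper actually uses.
    kernel = np.ones(ksize) / ksize
    low = np.array([np.convolve(row, kernel, mode="same") for row in image])
    high = image - low  # residual keeps the high-frequency fringes
    return low, high

# Toy input: smooth content (standing in for the face image) plus a
# high-frequency fringe pattern standing in for the projected illumination.
x = np.linspace(0.0, 1.0, 128)
base = np.tile(x, (128, 1))                   # smooth "face" content
fringes = 0.05 * np.sin(2 * np.pi * 20 * x)   # 20 cycles across each row
img = base + np.tile(fringes, (128, 1))

low, high = split_frequency_components(img)
# The low-pass image approximates the pattern-free content, while the
# dominant spatial frequency of the residual matches the projected pattern.
print(np.abs(low - base).mean())
print(int(np.argmax(np.abs(np.fft.rfft(high[64])))))  # fringe frequency bin
```

In the paper's setting, the low-frequency component would feed an off-the-shelf 2D face recognition model unchanged, while the fringe residual encodes depth-induced pattern deformations from which 3D features are derived.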
Related papers
- Fake It Without Making It: Conditioned Face Generation for Accurate 3D Face Reconstruction [5.079602839359523]
We present a method to generate a large-scale synthesised dataset of 250K photorealistic images and their corresponding shape parameters and depth maps, which we call SynthFace.
Our synthesis method conditions Stable Diffusion on depth maps sampled from the FLAME 3D Morphable Model (3DMM) of the human face, allowing us to generate a diverse set of shape-consistent facial images that is designed to be balanced in race and gender.
We propose ControlFace, a deep neural network, trained on SynthFace, which achieves competitive performance on the NoW benchmark, without requiring 3D supervision or manual 3D asset creation.
arXiv Detail & Related papers (2023-07-25T16:42:06Z)
- Improving 2D face recognition via fine-level facial depth generation and RGB-D complementary feature learning [0.8223798883838329]
We propose a fine-grained facial depth generation network and an improved multimodal complementary feature learning network.
Experiments on the Lock3DFace dataset and the IIIT-D dataset show that the proposed FFDGNet and IMCFLNet can improve the accuracy of RGB-D face recognition.
arXiv Detail & Related papers (2023-05-08T02:33:59Z)
- Towards Realistic Generative 3D Face Models [41.574628821637944]
This paper proposes a 3D controllable generative face model to produce high-quality albedo and precise 3D shape.
By combining 2D face generative models with semantic face manipulation, this method enables editing of detailed 3D rendered faces.
arXiv Detail & Related papers (2023-04-24T22:47:52Z)
- Generating 2D and 3D Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method to represent the self-occlusions of foreground objects in 3D into a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Reconstructing A Large Scale 3D Face Dataset for Deep 3D Face Identification [9.159921061636695]
We propose a framework of 2D-aided deep 3D face identification.
In particular, we propose to reconstruct millions of 3D face scans from a large scale 2D face database.
Our proposed approach achieves state-of-the-art rank-1 scores on the FRGC v2.0, Bosphorus, and BU-3DFE 3D face databases.
arXiv Detail & Related papers (2020-10-16T13:48:38Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over the state-of-the-arts.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- Adaptive 3D Face Reconstruction from a Single Image [45.736818498242016]
We propose a novel joint 2D and 3D optimization method to adaptively reconstruct 3D face shapes from a single image.
Experimental results on multiple datasets demonstrate that our method can generate high-quality reconstruction from a single color image.
arXiv Detail & Related papers (2020-07-08T09:35:26Z)
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly-accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.