3D Face Morphing Attacks: Generation, Vulnerability and Detection
- URL: http://arxiv.org/abs/2201.03454v3
- Date: Fri, 13 Oct 2023 07:48:24 GMT
- Title: 3D Face Morphing Attacks: Generation, Vulnerability and Detection
- Authors: Jag Mohan Singh, Raghavendra Ramachandra
- Abstract summary: Face Recognition systems have been found to be vulnerable to morphing attacks.
This work presents a novel direction for generating face-morphing attacks in 3D.
- Score: 3.700129710233692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face Recognition systems (FRS) have been found to be vulnerable to morphing
attacks, where the morphed face image is generated by blending the face images
from contributory data subjects. This work presents a novel direction for
generating face-morphing attacks in 3D. To this end, we introduce a novel
approach based on blending 3D face point clouds corresponding to contributory
data subjects. The proposed method generates 3D face morphing by projecting the
input 3D face point clouds onto depth maps and 2D color images, followed by
image blending and warping operations performed independently on the color
images and depth maps. We then back-project the 2D morphed color map and the
depth map to the point cloud using the canonical (fixed) view. Given that the
generated 3D face morphing models contain holes owing to the single canonical
view, we propose a new hole-filling algorithm that yields a high-quality 3D
face morphing model. Extensive experiments were
conducted on the newly generated 3D face dataset comprising 675 3D scans
corresponding to 41 unique data subjects, and on a publicly available database
(FaceScape) with 100 data subjects. Experiments were performed to benchmark the
vulnerability of the proposed 3D morph-generation scheme against automatic
2D and 3D FRS and against human observer analysis. We also present a quantitative
assessment of the quality of the generated 3D face-morphing models using eight
different quality metrics. Finally, we propose three different 3D face Morphing
Attack Detection (3D-MAD) algorithms to benchmark the performance of 3D face
morphing attack detection techniques.
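The generation pipeline described in the abstract (projecting the two point clouds to canonical-view depth maps and color images, blending the 2D maps, and back-projecting to a point cloud) can be illustrated with a rough sketch. The Python snippet below is only a simplified illustration under assumed conventions: an orthographic canonical view, naive pixel-wise averaging in place of the paper's landmark-guided blending and warping, and no hole filling; all function names are hypothetical and not from the authors' code.

```python
# Minimal sketch of the canonical-view morphing pipeline described above.
# Simplifying assumptions (not from the paper): orthographic projection along z,
# naive pixel-wise averaging instead of landmark-guided warping, no hole filling.
import numpy as np

def project_to_maps(points, colors, res=256):
    """Rasterize a 3D point cloud (N, 3) with per-point colors (N, 3) into a
    canonical-view depth map and color image."""
    depth = np.zeros((res, res))
    image = np.zeros((res, res, 3))
    xy = points[:, :2]
    # Normalize x, y into pixel coordinates of the fixed (canonical) view.
    uv = ((xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-8) * (res - 1)).astype(int)
    for (u, v), z, c in zip(uv, points[:, 2], colors):
        if z > depth[v, u]:          # keep the point nearest the canonical camera
            depth[v, u] = z
            image[v, u] = c
    return depth, image

def blend_maps(map_a, map_b, alpha=0.5):
    """Pixel-wise blend of two maps; the paper instead warps both maps toward
    averaged facial landmarks before blending."""
    return alpha * map_a + (1.0 - alpha) * map_b

def back_project(depth, image):
    """Lift the blended depth/color maps back to a colored 3D point cloud using
    the same canonical view (holes from the single view are simply left empty here)."""
    v, u = np.nonzero(depth)
    points = np.stack([u, v, depth[v, u]], axis=1).astype(float)
    return points, image[v, u]

# Toy usage with random stand-in scans; real inputs are registered 3D face scans.
rng = np.random.default_rng(0)
pts_a, col_a = rng.random((5000, 3)), rng.random((5000, 3))
pts_b, col_b = rng.random((5000, 3)), rng.random((5000, 3))
depth_a, img_a = project_to_maps(pts_a, col_a)
depth_b, img_b = project_to_maps(pts_b, col_b)
morph_pts, morph_cols = back_project(blend_maps(depth_a, depth_b),
                                     blend_maps(img_a, img_b))
```

In the method as summarized above, the color and depth maps would additionally be warped before blending, and the holes left by the single canonical view would be filled by the proposed hole-filling algorithm.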
Related papers
- 3D-GANTex: 3D Face Reconstruction with StyleGAN3-based Multi-View Images and 3DDFA based Mesh Generation [0.8479659578608233]
This paper introduces a novel method for texture estimation from a single image by first using StyleGAN and 3D Morphable Models.
The results show that the generated mesh is of high quality with near-accurate texture representation.
arXiv Detail & Related papers (2024-10-21T13:42:06Z) - FaceGPT: Self-supervised Learning to Chat about 3D Human Faces [69.4651241319356]
We introduce FaceGPT, a self-supervised learning framework for Large Vision-Language Models (VLMs) to reason about 3D human faces from images and text.
FaceGPT embeds the parameters of a 3D morphable face model (3DMM) into the token space of a VLM.
We show that FaceGPT achieves high-quality 3D face reconstructions and retains the ability for general-purpose visual instruction following.
arXiv Detail & Related papers (2024-06-11T11:13:29Z) - ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
The results achieve an unprecedented level of identity-consistent, high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z) - Single-Shot Implicit Morphable Faces with Consistent Texture
Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z) - Generating 2D and 3D Master Faces for Dictionary Attacks with a
Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z) - Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z) - FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face
Reconstruction [29.920622006999732]
We present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction.
By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input.
We also use FaceScape data to generate in-the-wild and in-the-lab benchmarks for evaluating recent single-view face reconstruction methods.
arXiv Detail & Related papers (2021-11-01T16:48:34Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - Multi-channel Deep 3D Face Recognition [4.726009758066045]
The accuracy of 2D face recognition is still challenged by changes in pose, illumination, make-up, and expression.
We propose a multi-channel deep 3D face network for face recognition based on 3D face data.
The multi-channel deep 3D face network achieves a face recognition accuracy of 98.6%.
arXiv Detail & Related papers (2020-09-30T15:29:05Z) - FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed
Riggable 3D Face Prediction [39.95272819738226]
We present a novel algorithm that is able to predict elaborate riggable 3D face models from a single image input.
The FaceScape dataset provides 18,760 textured 3D faces captured from 938 subjects, each with 20 specific expressions.
arXiv Detail & Related papers (2020-03-31T07:11:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.