Facial De-morphing: Extracting Component Faces from a Single Morph
- URL: http://arxiv.org/abs/2209.02933v1
- Date: Wed, 7 Sep 2022 05:01:02 GMT
- Title: Facial De-morphing: Extracting Component Faces from a Single Morph
- Authors: Sudipta Banerjee and Prateek Jaiswal and Arun Ross
- Abstract summary: Morph attack detection strategies can detect morphs but cannot recover the images or identities used in creating them.
We propose a novel de-morphing method that can recover images of both identities simultaneously from a single morphed face image.
- Score: 12.346914707006773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A face morph is created by strategically combining two or more face images
corresponding to multiple identities. The intention is for the morphed image to
match with multiple identities. Current morph attack detection strategies can
detect morphs but cannot recover the images or identities used in creating
them. The task of deducing the individual face images from a morphed face image
is known as de-morphing. Existing work in de-morphing assumes the
availability of a reference image pertaining to one identity in order to
recover the image of the accomplice - i.e., the other identity. In this work,
we propose a novel de-morphing method that can recover images of both
identities simultaneously from a single morphed face image without needing a
reference image or prior information about the morphing process. We propose a
generative adversarial network that achieves single image-based de-morphing
with a surprisingly high degree of visual realism and biometric similarity with
the original face images. We demonstrate the performance of our method on
landmark-based morphs and generative model-based morphs with promising results.
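To make the input/output structure described above concrete, here is a minimal, hypothetical sketch of a two-headed generator that maps one morphed image to two candidate component faces. The module name, layer sizes, and the use of PyTorch are illustrative assumptions, not the architecture from the paper; training would additionally require a discriminator and a biometric identity loss, which are omitted here.

```python
# Illustrative sketch only: a shared encoder followed by two decoder heads,
# one per recovered identity. Not the paper's actual architecture.
import torch
import torch.nn as nn

class DemorphGenerator(nn.Module):
    """Maps one morphed face image to two candidate component faces."""
    def __init__(self, ch: int = 64):
        super().__init__()
        # Shared encoder for the morphed input image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Two decoder heads, one per recovered identity.
        def decoder() -> nn.Sequential:
            return nn.Sequential(
                nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
            )
        self.head_a, self.head_b = decoder(), decoder()

    def forward(self, morph: torch.Tensor):
        feats = self.encoder(morph)
        return self.head_a(feats), self.head_b(feats)

if __name__ == "__main__":
    g = DemorphGenerator()
    morph = torch.randn(1, 3, 128, 128)   # stand-in for a morphed face image
    face_a, face_b = g(morph)             # two recovered component faces
    print(face_a.shape, face_b.shape)     # torch.Size([1, 3, 128, 128]) each
```

One design note, stated as an assumption: because the two heads are symmetric, the output ordering is ambiguous without identity-aware supervision, which is one reason a biometric similarity loss would matter in a full training setup.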
Related papers
- LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion [5.602947425285195]
Face morphing attacks pose a severe security threat to face recognition systems.
We present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings.
We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential.
arXiv Detail & Related papers (2024-10-10T14:41:37Z)
- Facial Demorphing via Identity Preserving Image Decomposition [10.902536447343465]
Morph attack detection techniques do not extract information about the underlying bonafides used to create the morphs.
We propose a novel method that is reference-free and recovers the bonafides with high accuracy.
Our method is observed to reconstruct high-quality bonafides in terms of definition and fidelity.
arXiv Detail & Related papers (2024-08-20T16:42:11Z)
- FlashFace: Human Image Personalization with High-fidelity Identity Preservation [59.76645602354481]
FlashFace allows users to easily personalize their own photos by providing one or a few reference face images and a text prompt.
Our approach is distinguishable from existing human photo customization methods by higher-fidelity identity preservation and better instruction following.
arXiv Detail & Related papers (2024-03-25T17:59:57Z)
- Arc2Face: A Foundation Model for ID-Consistent Human Faces [95.00331107591859]
Arc2Face is an identity-conditioned face foundation model.
It can generate diverse photo-realistic images with a higher degree of face similarity than existing models.
arXiv Detail & Related papers (2024-03-18T10:32:51Z)
- SDeMorph: Towards Better Facial De-morphing from Single Morph [0.0]
Face Recognition Systems (FRS) are vulnerable to morph attacks.
Current Morph Attack Detection (MAD) techniques can detect morphs but are unable to recover the identities used to create them.
We propose SDeMorph, a novel de-morphing method that is reference-free and recovers the identities of bona fides.
arXiv Detail & Related papers (2023-08-22T13:46:12Z)
- DreamIdentity: Improved Editability for Efficient Face-identity Preserved Image Generation [69.16517915592063]
We propose a novel face-identity encoder to learn an accurate representation of human faces.
We also propose self-augmented editability learning to enhance the editability of models.
Our methods can generate identity-preserved images under different scenes at a much faster speed.
arXiv Detail & Related papers (2023-07-01T11:01:17Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- Can GAN Generated Morphs Threaten Face Recognition Systems Equally as Landmark Based Morphs? -- Vulnerability and Detection [22.220940043294334]
We propose a new framework for generating face morphs using a newer Generative Adversarial Network (GAN) - StyleGAN.
Using a newly created dataset of 2500 morphed face images, we examine whether GAN-generated morphs threaten face recognition systems as much as landmark-based morphs.
arXiv Detail & Related papers (2020-07-07T16:52:56Z)
- Style Your Face Morph and Improve Your Face Morphing Attack Detector [2.0883760606514934]
A morphed face image is a synthetically created image that looks so similar to the faces of two subjects that both can use it for verification.
We propose a style transfer based method that improves the quality of morphed face images.
arXiv Detail & Related papers (2020-04-23T19:29:07Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim to transform an image from a fine-grained category into new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity-related and identity-unrelated factors of an image (see the sketch after this entry).
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
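As a companion to the entry above, the following is a minimal, hypothetical sketch of disentangling identity-related and identity-unrelated factors with two encoders and a shared decoder. The class name, layer sizes, and use of PyTorch are assumptions for illustration and do not reproduce the cited paper's architecture; adversarial and identity-preservation losses are omitted.

```python
# Illustrative sketch only: separate identity and attribute encoders whose
# codes are recombined by a shared decoder. Not the cited paper's model.
import torch
import torch.nn as nn

class DisentangleAutoencoder(nn.Module):
    """Splits an image into an identity code and an attribute code,
    then decodes a recombined pair back into an image."""
    def __init__(self, id_dim: int = 128, attr_dim: int = 64):
        super().__init__()
        def encoder(out_dim: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, out_dim),
            )
        self.id_enc = encoder(id_dim)      # identity-related factors
        self.attr_enc = encoder(attr_dim)  # identity-unrelated factors (pose, lighting, ...)
        self.decoder = nn.Sequential(
            nn.Linear(id_dim + attr_dim, 64 * 8 * 8), nn.ReLU(inplace=True),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, attr_source: torch.Tensor = None) -> torch.Tensor:
        # Combine the identity code of x with the attribute code of
        # attr_source (or of x itself) and decode a 32x32 image.
        attrs = x if attr_source is None else attr_source
        z = torch.cat([self.id_enc(x), self.attr_enc(attrs)], dim=1)
        return self.decoder(z)

if __name__ == "__main__":
    model = DisentangleAutoencoder()
    a, b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    swapped = model(a, attr_source=b)   # identity of a, attributes of b
    print(swapped.shape)                # torch.Size([1, 3, 32, 32])
```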
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy or quality of the information presented and is not responsible for any consequences arising from its use.