dc-GAN: Dual-Conditioned GAN for Face Demorphing From a Single Morph
- URL: http://arxiv.org/abs/2411.14494v1
- Date: Wed, 20 Nov 2024 19:24:30 GMT
- Title: dc-GAN: Dual-Conditioned GAN for Face Demorphing From a Single Morph
- Authors: Nitish Shukla, Arun Ross
- Abstract summary: We propose dc-GAN, a novel GAN-based demorphing method conditioned on the morph images.
Our method overcomes morph-replication and produces high-quality reconstructions of the bonafide images used to create the morphs.
- Score: 10.902536447343465
- Abstract: A facial morph is an image created by combining two face images pertaining to two distinct identities. Face demorphing inverts the process and tries to recover the original images constituting a facial morph. While morph attack detection (MAD) techniques can be used to flag morph images, they do not divulge any visual information about the faces used to create them. Demorphing helps address this problem. Existing demorphing techniques are either very restrictive (they assume the identities are known during testing) or produce feeble outputs (the two outputs look very similar). In this paper, we overcome these issues by proposing dc-GAN, a novel GAN-based demorphing method conditioned on the morph images. Our method overcomes morph-replication and produces high-quality reconstructions of the bonafide images used to create the morphs. Moreover, our method is highly generalizable across demorphing paradigms (differential/reference-free). We conduct experiments on the AMSL, FRLL-Morphs and MorDiff datasets to showcase the efficacy of our method.
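To make the conditioning idea concrete, below is a minimal PyTorch sketch of a GAN set up for single-image demorphing: a generator that takes only the morph and emits two bonafide reconstructions, and a discriminator that judges each reconstruction jointly with (i.e., conditioned on) the same morph. Every module name, layer size, and the two-decoder design are illustrative assumptions, not details from the dc-GAN paper or its code.

```python
# Minimal, illustrative sketch of a morph-conditioned demorphing GAN.
# All names, layer sizes, and design choices here are assumptions made for
# illustration; they are NOT taken from the dc-GAN paper or its code.
import torch
import torch.nn as nn


class DemorphGenerator(nn.Module):
    """Maps a single morph image to two candidate bonafide reconstructions."""

    def __init__(self, ch: int = 64):
        super().__init__()
        # Shared encoder over the conditioning morph image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # Two decoder heads, one per recovered identity, to discourage the
        # "morph replication" failure mode where both outputs mimic the morph.
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
            )
            for _ in range(2)
        ])

    def forward(self, morph: torch.Tensor):
        feats = self.encoder(morph)
        return [dec(feats) for dec in self.decoders]


class ConditionalDiscriminator(nn.Module):
    """Scores a (morph, candidate face) pair, i.e. real/fake conditioned on the morph."""

    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),  # patch-level logits
        )

    def forward(self, morph: torch.Tensor, face: torch.Tensor):
        return self.net(torch.cat([morph, face], dim=1))


if __name__ == "__main__":
    morph = torch.randn(1, 3, 128, 128)   # a single morph image (dummy data)
    gen = DemorphGenerator()
    out_a, out_b = gen(morph)              # two reconstructed bonafide candidates
    disc = ConditionalDiscriminator()
    logits = disc(morph, out_a)            # critique conditioned on the morph
    print(out_a.shape, out_b.shape, logits.shape)
```

In this sketch, pairing each output with the morph at the discriminator is one simple way to express the conditioning; a full training loop would add adversarial and reconstruction losses, which are omitted here.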
Related papers
- LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion [5.602947425285195]
Face morphing attacks pose a severe security threat to face recognition systems.
We present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings.
We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential.
arXiv Detail & Related papers (2024-10-10T14:41:37Z)
- Facial Demorphing via Identity Preserving Image Decomposition [10.902536447343465]
Morph attack detection techniques do not extract information about the underlying bonafides used to create a morph.
We propose a novel method that is reference-free and recovers the bonafides with high accuracy.
Our method is observed to reconstruct high-quality bonafides in terms of definition and fidelity.
arXiv Detail & Related papers (2024-08-20T16:42:11Z)
- Arc2Face: A Foundation Model for ID-Consistent Human Faces [95.00331107591859]
Arc2Face is an identity-conditioned face foundation model.
It can generate diverse photo-realistic images with a higher degree of face similarity than existing models.
arXiv Detail & Related papers (2024-03-18T10:32:51Z)
- SDeMorph: Towards Better Facial De-morphing from Single Morph [0.0]
Face Recognition Systems (FRS) are vulnerable to morph attacks.
Current Morph Attack Detection (MAD) techniques can detect a morph but are unable to recover the identities used to create it.
We propose SDeMorph, a novel de-morphing method that is reference-free and recovers the identities of bona fides.
arXiv Detail & Related papers (2023-08-22T13:46:12Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- Facial De-morphing: Extracting Component Faces from a Single Morph [12.346914707006773]
Morph attack detection strategies can detect morphs but cannot recover the images or identities used in creating them.
We propose a novel de-morphing method that can recover images of both identities simultaneously from a single morphed face image.
arXiv Detail & Related papers (2022-09-07T05:01:02Z)
- FT-TDR: Frequency-guided Transformer and Top-Down Refinement Network for Blind Face Inpainting [77.78305705925376]
Blind face inpainting refers to the task of reconstructing visual contents without explicitly indicating the corrupted regions in a face image.
We propose a novel two-stage blind face inpainting method named Frequency-guided Transformer and Top-Down Refinement Network (FT-TDR) to tackle these challenges.
arXiv Detail & Related papers (2021-08-10T03:12:01Z)
- Disentangled Lifespan Face Synthesis [100.29058545878341]
A lifespan face synthesis (LFS) model aims to generate a set of photo-realistic face images of a person's whole life, given only one snapshot as reference.
The face image generated for a target age code is expected to be age-sensitive, as reflected by bio-plausible transformations of shape and texture.
This is achieved by extracting shape, texture and identity features separately from an encoder.
arXiv Detail & Related papers (2021-08-05T22:33:14Z)
- Can GAN Generated Morphs Threaten Face Recognition Systems Equally as Landmark Based Morphs? -- Vulnerability and Detection [22.220940043294334]
We propose a new framework for generating face morphs using a newer Generative Adversarial Network (GAN) - StyleGAN.
With the newly created morphing dataset of 2,500 morphed face images, we ask a critical question: can GAN-generated morphs threaten face recognition systems as much as landmark-based morphs?
arXiv Detail & Related papers (2020-07-07T16:52:56Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim to transform an image belonging to a fine-grained category into new synthesized images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity-related and identity-unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.