diffDemorph: Extending Reference-Free Demorphing to Unseen Faces
- URL: http://arxiv.org/abs/2505.14527v3
- Date: Fri, 06 Jun 2025 13:25:33 GMT
- Title: diffDemorph: Extending Reference-Free Demorphing to Unseen Faces
- Authors: Nitish Shukla, Arun Ross
- Abstract summary: diffDeMorph effectively disentangles component images from a composite morph image with high visual fidelity. We train our method on morphs created using synthetically generated face images and test on real morphs, thereby enhancing the practicality of the technique.
- Score: 10.902536447343465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A face morph is created by combining two face images corresponding to two identities to produce a composite that successfully matches both the constituent identities. Reference-free (RF) demorphing reverses this process using only the morph image, without the need for additional reference images. Previous RF demorphing methods are overly constrained, as they rely on assumptions about the distributions of training and testing morphs such as the morphing technique used (e.g., landmark-based) and face image style (e.g., passport photos). In this paper, we introduce a novel diffusion-based approach, referred to as diffDeMorph, that effectively disentangles component images from a composite morph image with high visual fidelity. Our method is the first to generalize across morph techniques and face styles, beating the current state of the art by $\geq 59.46\%$ under a common training protocol across all datasets tested. We train our method on morphs created using synthetically generated face images and test on real morphs, thereby enhancing the practicality of the technique. Experiments on six datasets and two face matchers establish the utility and efficacy of our method.
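As the abstract notes, a face morph is a composite of two face images that matches both constituent identities. At its simplest, the appearance component of a morph is a weighted blend of two aligned faces; the sketch below illustrates only that blend (landmark-based morphing additionally warps geometry, which is omitted here), and all names in it are illustrative rather than from the paper.

```python
import numpy as np

def blend_morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two pre-aligned face images into a single composite morph."""
    assert face_a.shape == face_b.shape, "faces must be aligned to the same size"
    return alpha * face_a.astype(np.float64) + (1.0 - alpha) * face_b.astype(np.float64)

# Stand-ins for two aligned face images (constant intensity for illustration).
face_a = np.full((4, 4, 3), 200.0)  # identity A
face_b = np.full((4, 4, 3), 100.0)  # identity B
morph = blend_morph(face_a, face_b, alpha=0.5)
print(morph[0, 0, 0])  # 150.0
```

Demorphing is the inverse of this operation: recovering `face_a` and `face_b` from the composite alone. Since infinitely many image pairs can produce the same blend, the problem is ill-posed, which is why the methods listed here rely on generative priors such as GANs and diffusion models.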
Related papers
- Facial Demorphing from a Single Morph Using a Latent Conditional GAN [10.902536447343465]
The proposed method decomposes a morph in latent space allowing it to demorph images created from unseen morph techniques and face styles. We train our method on morphs created from synthetic faces and test on morphs created from real faces using different morph techniques.
arXiv Detail & Related papers (2025-07-24T16:41:47Z) - dc-GAN: Dual-Conditioned GAN for Face Demorphing From a Single Morph [10.902536447343465]
We propose dc-GAN, a novel GAN-based demorphing method conditioned on the morph images. Our method overcomes morph-replication and produces high quality reconstructions of the bonafide images used to create the morphs.
arXiv Detail & Related papers (2024-11-20T19:24:30Z) - Facial Demorphing via Identity Preserving Image Decomposition [10.902536447343465]
Morph attack detection techniques do not extract information about the underlying bonafides used to create them.
We propose a novel method that is reference-free and recovers the bonafides with high accuracy.
Our method is observed to reconstruct high-quality bonafides in terms of definition and fidelity.
arXiv Detail & Related papers (2024-08-20T16:42:11Z) - Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z) - Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, either training models directly on degraded images or on their enhanced counterparts produced by face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z) - MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z) - Facial De-morphing: Extracting Component Faces from a Single Morph [12.346914707006773]
Morph attack detection strategies can detect morphs but cannot recover the images or identities used in creating them.
We propose a novel de-morphing method that can recover images of both identities simultaneously from a single morphed face image.
arXiv Detail & Related papers (2022-09-07T05:01:02Z) - Disentangled Lifespan Face Synthesis [100.29058545878341]
A lifespan face synthesis (LFS) model aims to generate a set of photo-realistic face images of a person's whole life, given only one snapshot as reference.
The generated face image given a target age code is expected to be age-sensitive, as reflected by bio-plausible transformations of shape and texture.
This is achieved by extracting shape, texture and identity features separately from an encoder.
arXiv Detail & Related papers (2021-08-05T22:33:14Z) - Image Morphing with Perceptual Constraints and STN Alignment [70.38273150435928]
We propose a conditional GAN morphing framework operating on a pair of input images.
A special training protocol produces sequences of frames which, combined with a perceptual similarity loss, promote smooth transformation over time.
We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects.
arXiv Detail & Related papers (2020-04-29T10:49:10Z) - Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.