MorphGANFormer: Transformer-based Face Morphing and De-Morphing
- URL: http://arxiv.org/abs/2302.09404v1
- Date: Sat, 18 Feb 2023 19:09:11 GMT
- Title: MorphGANFormer: Transformer-based Face Morphing and De-Morphing
- Authors: Na Zhang, Xudong Liu, Xin Li, Guo-Jun Qi
- Abstract summary: StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
- Score: 55.211984079735196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic face image manipulation has received increasing attention in recent
years. StyleGAN-based approaches to face morphing are among the leading
techniques; however, they often suffer from noticeable blurring and artifacts
as a result of the uniform attention in the latent feature space. In this
paper, we propose to develop a transformer-based alternative to face morphing
and demonstrate its superiority to StyleGAN-based methods. Our contributions
are threefold. First, inspired by GANformer, we introduce a bipartite structure
to exploit long-range interactions in face images for iterative propagation of
information from latent variables to salient facial features. Special loss
functions are designed to support the optimization of face morphing. Second, we
extend the study of transformer-based face morphing to demorphing by presenting
an effective defense strategy with access to a reference image using the same
generator of MorphGANFormer. Such demorphing is conceptually similar to
unmixing of hyperspectral images but operates in the latent (instead of pixel)
space. Third, for the first time, we address the fundamental
vulnerability-detectability trade-off in face morphing studies. It is argued
that neither doppelganger nor random pair selection is optimal, and a Lagrangian
multiplier-based approach should be used to achieve an improved trade-off
between recognition vulnerability and attack detectability.
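To make the latent-space "unmixing" view of demorphing and the Lagrangian trade-off concrete, here is a minimal sketch assuming a simple linear mixing of latent codes. The function names, the blending weight alpha, and the scalar vulnerability/detectability scores are illustrative assumptions, not the paper's actual formulation (the paper performs demorphing with the MorphGANFormer generator itself rather than a closed-form inversion).

```python
import numpy as np

def morph_latents(w_a: np.ndarray, w_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Mix two subjects' latent codes (hypothetical linear mixing model)."""
    return alpha * w_a + (1.0 - alpha) * w_b

def demorph_latent(w_morph: np.ndarray, w_reference: np.ndarray,
                   alpha: float = 0.5) -> np.ndarray:
    """Recover the unknown accomplice's latent code from the morph latent and a
    trusted reference latent by inverting the mixing model; this is latent-space
    'unmixing', analogous to hyperspectral unmixing in pixel space."""
    return (w_morph - alpha * w_reference) / (1.0 - alpha)

def pair_selection_objective(vulnerability: float, detectability: float,
                             lam: float) -> float:
    """Lagrangian-style trade-off: reward recognition vulnerability while
    penalizing attack detectability, with multiplier `lam` setting the balance."""
    return vulnerability - lam * detectability
```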
Related papers
- LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion [5.602947425285195]
Face morphing attacks pose a severe security threat to face recognition systems.
We present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings.
We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential.
arXiv Detail & Related papers (2024-10-10T14:41:37Z)
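In the simplest reading of the representation-level morphing described above, the two subjects' face recognition embeddings are interpolated before template inversion. The sketch below illustrates only that idea; the interpolation scheme, the re-normalization, and the `inversion_model` placeholder are assumptions, not LADIMO's actual pipeline.

```python
import numpy as np

def morph_embeddings(emb_a: np.ndarray, emb_b: np.ndarray, beta: float = 0.5) -> np.ndarray:
    """Representation-level morph: interpolate two face recognition embeddings
    and re-normalize to the unit hypersphere (a common convention for templates)."""
    mixed = beta * emb_a + (1.0 - beta) * emb_b
    return mixed / np.linalg.norm(mixed)

# The morphed template would then be decoded into an image by a template-inversion
# generator (a latent diffusion model in LADIMO's case), e.g.:
#   morph_image = inversion_model(morph_embeddings(emb_a, emb_b))
# where `inversion_model` is a placeholder, not LADIMO's actual API.
```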
- Approximating Optimal Morphing Attacks using Template Inversion [4.0361765428523135]
We develop a novel type of deep morphing attack based on inverting a theoretically optimal morph embedding.
We generate morphing attacks from several source datasets and study the effectiveness of those attacks against several face recognition networks.
arXiv Detail & Related papers (2024-02-01T15:51:46Z)
- Optimal-Landmark-Guided Image Blending for Face Morphing Attacks [8.024953195407502]
We propose a novel approach for conducting face morphing attacks, which utilizes optimal-landmark-guided image blending.
Our proposed method overcomes the limitations of previous approaches by optimizing the morphing landmarks and using Graph Convolutional Networks (GCNs) to combine landmark and appearance features.
arXiv Detail & Related papers (2024-01-30T03:45:06Z)
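A conventional landmark-guided morph warps both faces to a shared landmark layout and blends the aligned appearances; the entry above optimizes that layout with GCNs rather than simply averaging it. The sketch below shows only the conventional baseline, with `warp_to_landmarks` left as a hypothetical warping helper.

```python
import numpy as np

def blend_landmarks(lm_a: np.ndarray, lm_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Average corresponding facial landmarks; the paper optimizes this layout
    instead of averaging, so plain averaging is only an illustrative stand-in."""
    return w * lm_a + (1.0 - w) * lm_b

def landmark_guided_morph(img_a, img_b, lm_a, lm_b, warp_to_landmarks, w=0.5):
    """Warp both faces to the shared landmark layout, then alpha-blend appearance.
    `warp_to_landmarks(image, src_landmarks, dst_landmarks)` stands in for any
    piecewise-affine or thin-plate-spline warper; it is not the paper's method."""
    target = blend_landmarks(lm_a, lm_b, w)
    warped_a = warp_to_landmarks(img_a, lm_a, target)
    warped_b = warp_to_landmarks(img_b, lm_b, target)
    return w * warped_a + (1.0 - w) * warped_b
```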
- Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, whether training models directly on degraded images or on their counterparts enhanced with face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- Landmark Enforcement and Style Manipulation for Generative Morphing [24.428843425522107]
We propose a novel StyleGAN morph generation technique that introduces a landmark enforcement method.
Exploration of our model's latent space is conducted using Principal Component Analysis (PCA) to accentuate the effect of both bona fide faces on the morphed latent representation.
To improve high-frequency reconstruction in the morphs, we study the trainability of the noise input for the StyleGAN2 model.
arXiv Detail & Related papers (2022-10-18T22:10:25Z)
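PCA over a collection of latent codes yields principal directions along which a morphed latent can be shifted to emphasize either bona fide subject, as described in the entry above. A minimal sketch using scikit-learn, assuming StyleGAN2 W-space codes of dimension 512; the shapes and names are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for a matrix of W-space latent codes, one 512-d code per face image.
latents = np.random.randn(1000, 512)

pca = PCA(n_components=20)
pca.fit(latents)

def shift_along_component(w: np.ndarray, k: int, strength: float) -> np.ndarray:
    """Move a (morphed) latent code along the k-th principal direction to
    accentuate one subject's contribution to the morph."""
    return w + strength * pca.components_[k]
```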
- UnGANable: Defending Against GAN-based Face Manipulation [69.90981797810348]
Deepfakes pose severe threats of visual misinformation to our society.
One representative deepfake application is face manipulation that modifies a victim's facial attributes in an image.
We propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation.
arXiv Detail & Related papers (2022-10-03T14:20:01Z)
- FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation that decomposes and separately encodes facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, which have been used successfully in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z)
- Heterogeneous Face Frontalization via Domain Agnostic Learning [74.86585699909459]
We propose a domain agnostic learning-based generative adversarial network (DAL-GAN) which can synthesize frontal views in the visible domain from thermal faces with pose variations.
DAL-GAN consists of a generator with an auxiliary classifier and two discriminators which capture both local and global texture discriminations for better synthesis.
arXiv Detail & Related papers (2021-07-17T20:41:41Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)