Optimal-Landmark-Guided Image Blending for Face Morphing Attacks
- URL: http://arxiv.org/abs/2401.16722v1
- Date: Tue, 30 Jan 2024 03:45:06 GMT
- Title: Optimal-Landmark-Guided Image Blending for Face Morphing Attacks
- Authors: Qiaoyun He, Zongyong Deng, Zuyuan He, Qijun Zhao
- Abstract summary: We propose a novel approach for conducting face morphing attacks, which utilizes optimal-landmark-guided image blending.
Our proposed method overcomes the limitations of previous approaches by optimizing the morphing landmarks and using Graph Convolutional Networks (GCNs) to combine landmark and appearance features.
- Score: 8.024953195407502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel approach for conducting face morphing
attacks, which utilizes optimal-landmark-guided image blending. Current face
morphing attacks can be categorized into landmark-based and generation-based
approaches. Landmark-based methods use geometric transformations to warp facial
regions according to averaged landmarks but often produce morphed images with
poor visual quality. Generation-based methods, which employ generation models
to blend multiple face images, can achieve better visual quality but are often
unsuccessful in generating morphed images that can effectively evade
state-of-the-art face recognition systems~(FRSs). Our proposed method overcomes
the limitations of previous approaches by optimizing the morphing landmarks and
using Graph Convolutional Networks (GCNs) to combine landmark and appearance
features. We model facial landmarks as nodes in a bipartite graph that is fully
connected and utilize GCNs to simulate their spatial and structural
relationships. The aim is to capture variations in facial shape and enable
accurate manipulation of facial appearance features during the warping process,
resulting in morphed facial images that are highly realistic and visually
faithful. Experiments on two public datasets show that our method inherits the
advantages of previous landmark-based and generation-based methods and
generates morphed images with higher quality, posing a more significant threat
to state-of-the-art FRSs.
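
For readers unfamiliar with the landmark-based pipeline the abstract contrasts against, the following is a rough, generic sketch of that baseline (average the two subjects' landmarks, warp each face onto the averaged layout, then alpha-blend). It is not the authors' code; the scikit-image warping utilities and the assumption that landmarks arrive as (x, y) arrays from some external detector are illustrative choices.

```python
# Hedged sketch of a classical landmark-based morph (not the paper's method):
# average landmarks, piecewise-affine warp both faces, alpha-blend.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def landmark_morph(img_a, img_b, lm_a, lm_b, alpha=0.5):
    """img_*: float images in [0, 1] of identical shape; lm_*: (N, 2) arrays
    of (x, y) landmark coordinates for the same N points on each face."""
    h, w = img_a.shape[:2]
    # Add the image corners so the triangulated mesh covers the whole frame.
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], float)
    src_a = np.vstack([lm_a, corners])
    src_b = np.vstack([lm_b, corners])
    lm_avg = alpha * src_a + (1 - alpha) * src_b  # averaged morphing landmarks

    def warp_to_avg(img, lm_src):
        # skimage's warp expects a map from output coords to input coords,
        # so estimate the transform from the averaged layout back to the source.
        tform = PiecewiseAffineTransform()
        tform.estimate(lm_avg, lm_src)
        return warp(img, tform, output_shape=(h, w))

    # Warp both faces onto the shared layout, then blend their appearances.
    return alpha * warp_to_avg(img_a, src_a) + (1 - alpha) * warp_to_avg(img_b, src_b)
```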
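The proposed method replaces the plain landmark average with morphing landmarks optimized by GCNs over a fully connected bipartite graph of the two faces' landmark nodes. The snippet below is only a minimal sketch of that idea under assumed shapes and layer sizes (one GCN layer, a 68-point landmark convention, raw coordinates as node features); the paper's actual architecture, appearance features, and training objective are not reproduced here.

```python
# Hedged sketch (not the authors' code): fusing two faces' landmarks with a
# simple GCN over a fully connected bipartite graph, as the abstract describes.
import torch
import torch.nn as nn

N = 68               # landmarks per face (assumed 68-point convention)
D_IN, D_HID = 2, 16  # coordinate features in, hidden width (illustrative)

def bipartite_adjacency(n: int) -> torch.Tensor:
    """Fully connected bipartite adjacency between two sets of n landmark nodes,
    with self-loops and symmetric normalization (standard GCN preprocessing)."""
    a = torch.zeros(2 * n, 2 * n)
    a[:n, n:] = 1.0           # face-A nodes connect to every face-B node
    a[n:, :n] = 1.0           # and vice versa
    a += torch.eye(2 * n)     # self-loops
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class LandmarkGCN(nn.Module):
    """One GCN layer (A_hat X W) plus a head that refines and fuses the two
    landmark sets; a minimal stand-in for a landmark-optimization module."""
    def __init__(self, d_in=D_IN, d_hid=D_HID):
        super().__init__()
        self.w = nn.Linear(d_in, d_hid, bias=False)
        self.head = nn.Linear(d_hid, 2)   # predict (x, y) offsets per node

    def forward(self, lm_a: torch.Tensor, lm_b: torch.Tensor) -> torch.Tensor:
        n = lm_a.shape[0]
        x = torch.cat([lm_a, lm_b], dim=0)          # (2N, 2) node features
        a_hat = bipartite_adjacency(n)
        h = torch.relu(a_hat @ self.w(x))           # GCN propagation
        refined = x + self.head(h)                  # per-node refinement
        # Fuse the two faces' refined landmarks into one morphing layout.
        return 0.5 * (refined[:n] + refined[n:])

# Usage with random landmarks standing in for detector output:
morph_lm = LandmarkGCN()(torch.rand(N, 2), torch.rand(N, 2))
print(morph_lm.shape)   # torch.Size([68, 2])
```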
Related papers
- Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, whether training models directly on degraded images or on their enhanced counterparts produced by face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative for face morphing and demonstrate its superiority over StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- Landmark Enforcement and Style Manipulation for Generative Morphing [24.428843425522107]
We propose a novel StyleGAN morph generation technique by introducing a landmark enforcement method to resolve this issue.
Exploration of the latent space of our model is conducted using Principal Component Analysis (PCA) to accentuate the effect of both bona fide faces on the morphed latent representation.
To improve high-frequency reconstruction in the morphs, we study the trainability of the noise input for the StyleGAN2 model.
arXiv Detail & Related papers (2022-10-18T22:10:25Z)
- GMFIM: A Generative Mask-guided Facial Image Manipulation Model for Privacy Preservation [0.7734726150561088]
We propose a Generative Mask-guided Face Image Manipulation model based on GANs to apply imperceptible editing to the input face image.
Our model can achieve better performance against automated face recognition systems compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-01-10T14:09:14Z)
- FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation that decomposes and separately encodes facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, which have been used successfully in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is thus capable of producing realistic results.
Our method achieves state-of-the-art performance on multiple face reconstruction datasets.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN [22.220940043294334]
We present a new approach for generating strong attacks using an Identity Prior Driven Generative Adversarial Network.
The proposed MIPGAN is derived from StyleGAN with a newly formulated loss function that exploits perceptual quality and an identity factor (a rough illustrative sketch of such a combined objective follows this list).
We demonstrate the proposed approach's ability to generate strong morphing attacks by evaluating the vulnerability of both commercial and deep-learning-based Face Recognition Systems (FRSs) to them.
arXiv Detail & Related papers (2020-09-03T15:08:38Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
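
The MIPGAN entry above mentions a loss that combines perceptual quality with an identity prior. The sketch below shows one plausible form of such a combined objective; it is a hypothetical illustration, not MIPGAN's actual formulation, and `face_embed` and `perceptual_dist` are assumed stand-ins for a pretrained face-recognition embedder and a perceptual metric.

```python
# Hedged sketch of an identity-prior + perceptual-quality morphing objective.
import torch
import torch.nn.functional as F

def morph_loss(morph, img_a, img_b, face_embed, perceptual_dist, w_id=1.0, w_pq=1.0):
    e_m, e_a, e_b = face_embed(morph), face_embed(img_a), face_embed(img_b)
    # Identity term: push the morph's embedding toward both subjects equally.
    id_loss = 0.5 * ((1 - F.cosine_similarity(e_m, e_a, dim=-1)).mean()
                     + (1 - F.cosine_similarity(e_m, e_b, dim=-1)).mean())
    # Perceptual-quality term: keep the morph visually close to both sources.
    pq_loss = 0.5 * (perceptual_dist(morph, img_a) + perceptual_dist(morph, img_b))
    return w_id * id_loss + w_pq * pq_loss

# Dummy stand-ins just to exercise the function (real use would plug in an
# FRS embedder such as a pretrained ArcFace and a metric such as LPIPS):
imgs = torch.rand(3, 1, 3, 64, 64)
embed = lambda x: x.flatten(1)
pdist = lambda x, y: (x - y).abs().mean()
print(morph_loss(imgs[0], imgs[1], imgs[2], embed, pdist))
```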
This list is automatically generated from the titles and abstracts of the papers on this site.