GMFIM: A Generative Mask-guided Facial Image Manipulation Model for
Privacy Preservation
- URL: http://arxiv.org/abs/2201.03353v1
- Date: Mon, 10 Jan 2022 14:09:14 GMT
- Title: GMFIM: A Generative Mask-guided Facial Image Manipulation Model for
Privacy Preservation
- Authors: Mohammad Hossein Khojaste, Nastaran Moradzadeh Farid, Ahmad Nickabadi
- Abstract summary: We propose a Generative Mask-guided Face Image Manipulation model based on GANs to apply imperceptible editing to the input face image.
Our model can achieve better performance against automated face recognition systems in comparison to the state-of-the-art methods.
- Score: 0.7734726150561088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media websites and applications have become very popular, and
people share their photos on these networks. Automatic recognition and tagging
of people's photos has raised privacy concerns, and users seek methods for
hiding their identities from these algorithms. Generative adversarial networks
(GANs) have been shown to be very powerful both in generating highly diverse
face images and in editing them. In
this paper, we propose a Generative Mask-guided Face Image Manipulation (GMFIM)
model based on GANs to apply imperceptible editing to the input face image to
preserve the privacy of the person in the image. Our model consists of three
main components: a) the face mask module to cut the face area out of the input
image and omit the background, b) the GAN-based optimization module for
manipulating the face image and hiding the identity, and c) the merge module
for combining the background of the input image and the manipulated
de-identified face image. Different criteria are considered in the loss
function of the optimization step to produce high-quality images that are as
similar as possible to the input image while remaining unrecognizable to
automated face recognition (AFR) systems. The results of experiments on
different datasets show that our model achieves better performance against AFR
systems than the state-of-the-art methods and attains a higher attack success
rate in most of the 18 experiments. Moreover, the images generated by our
proposed model have the highest quality and are more pleasing to the human eye.
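The abstract describes a three-component pipeline (mask the face region, optimize a GAN-generated face against an AFR model, and merge the result back onto the original background) driven by a loss that balances visual similarity with identity hiding. As a rough illustration of that description only, here is a minimal sketch in which the component callables (face_mask_module, gan_optimizer_module, merge_module, fr_embed) and the two-term loss are assumptions, not the authors' implementation:

# Hypothetical sketch of the GMFIM pipeline and loss described above; module
# names and interfaces are assumptions, not the authors' released code.
import torch.nn.functional as F

def gmfim_pipeline(image, face_mask_module, gan_optimizer_module, merge_module):
    """De-identify a face image while keeping the original background.
    image: float tensor (C, H, W). The three callables stand in for the paper's
    components: a) face_mask_module returns a binary face mask, b)
    gan_optimizer_module searches a GAN for a visually similar but
    unrecognizable face, c) merge_module pastes it back onto the background."""
    mask = face_mask_module(image)                 # (1, H, W), 1 = face pixels
    face_only = image * mask                       # cut the face area out
    deidentified_face = gan_optimizer_module(face_only)
    return merge_module(image, deidentified_face, mask)

def optimization_loss(generated, original, fr_embed, w_sim=1.0, w_id=1.0):
    """Two of the criteria mentioned in the abstract: stay close to the input
    image and fool an automated face recognition (AFR) embedding network."""
    similarity_term = F.mse_loss(generated, original)     # image fidelity
    identity_term = F.cosine_similarity(                  # lower = harder to match
        fr_embed(generated.unsqueeze(0)),
        fr_embed(original.unsqueeze(0)),
        dim=-1,
    ).mean()
    return w_sim * similarity_term + w_id * identity_term

The paper's optimization step considers additional criteria; the two terms above only mirror the similarity-versus-identity trade-off stated in the abstract.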
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Optimal-Landmark-Guided Image Blending for Face Morphing Attacks [8.024953195407502]
We propose a novel approach for conducting face morphing attacks, which utilizes optimal-landmark-guided image blending.
Our proposed method overcomes the limitations of previous approaches by optimizing the morphing landmarks and using Graph Convolutional Networks (GCNs) to combine landmark and appearance features.
arXiv Detail & Related papers (2024-01-30T03:45:06Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- HiMFR: A Hybrid Masked Face Recognition Through Face Inpainting [0.7868449549351486]
We propose an end-to-end hybrid masked face recognition system, namely HiMFR.
The masked face detector module applies a pretrained Vision Transformer to detect whether a face is covered by a mask or not.
The inpainting module uses a fine-tuned image inpainting model based on a Generative Adversarial Network (GAN) to restore faces.
Finally, the hybrid face recognition module, based on ViT with an EfficientNetB3 backbone, recognizes the faces (a minimal sketch of this three-stage flow follows the list below).
arXiv Detail & Related papers (2022-09-19T11:26:49Z)
- Network Architecture Search for Face Enhancement [82.25775020564654]
We present a multi-task face restoration network, called Network Architecture Search for Face Enhancement (NASFE).
NASFE can enhance poor-quality face images containing a single degradation (i.e., noise or blur) or multiple degradations (noise + blur + low-light).
arXiv Detail & Related papers (2021-05-13T19:46:05Z)
- Joint Face Image Restoration and Frontalization for Recognition [79.78729632975744]
In real-world scenarios, many factors may harm face recognition performance, e.g., large pose, bad illumination, low resolution, blur and noise.
Previous efforts usually first restore the low-quality faces to high-quality ones and then perform face recognition.
We propose a Multi-Degradation Face Restoration model to restore frontalized high-quality faces from the given low-quality ones.
arXiv Detail & Related papers (2021-05-12T03:52:41Z)
- MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN [22.220940043294334]
We present a new approach for generating strong attacks using an Identity Prior Driven Generative Adversarial Network.
The proposed MIPGAN is derived from StyleGAN with a newly formulated loss function exploiting perceptual quality and an identity factor.
We demonstrate the proposed approach's ability to generate strong morphing attacks by evaluating the vulnerability of both commercial and deep-learning-based Face Recognition Systems to them.
arXiv Detail & Related papers (2020-09-03T15:08:38Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
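As referenced in the HiMFR entry above, its summary amounts to a detect-inpaint-recognize flow. The following minimal sketch uses hypothetical callables (mask_detector, inpainter, recognizer) solely to illustrate that flow; it is an assumption-based sketch, not the authors' code.

# Hypothetical sketch of the HiMFR three-stage flow summarized above.
def himfr_recognize(face_image, mask_detector, inpainter, recognizer):
    """mask_detector: pretrained ViT classifier, True if the face is masked.
    inpainter: GAN-based inpainting model restoring the occluded region.
    recognizer: hybrid ViT + EfficientNetB3 module returning an identity."""
    if mask_detector(face_image):            # stage 1: is the face covered?
        face_image = inpainter(face_image)   # stage 2: restore hidden pixels
    return recognizer(face_image)            # stage 3: recognize the face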