UnGANable: Defending Against GAN-based Face Manipulation
- URL: http://arxiv.org/abs/2210.00957v1
- Date: Mon, 3 Oct 2022 14:20:01 GMT
- Title: UnGANable: Defending Against GAN-based Face Manipulation
- Authors: Zheng Li, Ning Yu, Ahmed Salem, Michael Backes, Mario Fritz, and Yang Zhang
- Abstract summary: Deepfakes pose severe threats of visual misinformation to our society.
One representative deepfake application is face manipulation that modifies a victim's facial attributes in an image.
We propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation.
- Score: 69.90981797810348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deepfakes pose severe threats of visual misinformation to our society. One representative deepfake application is face manipulation, which modifies a victim's facial attributes in an image, e.g., changing her age or hair color. The state-of-the-art face manipulation techniques rely on Generative Adversarial Networks (GANs). In this paper, we propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation. Specifically, UnGANable focuses on defending against GAN inversion, an essential step for face manipulation. Its core technique is to search for alternative images (called cloaked images) around the original images (called target images) in image space. When posted online, these cloaked images can jeopardize the GAN inversion process. We consider two state-of-the-art inversion techniques, namely optimization-based inversion and hybrid inversion, and design five different defenses under five scenarios depending on the defender's background knowledge. Extensive experiments on four popular GAN models trained on two benchmark face datasets show that UnGANable achieves remarkable effectiveness and utility performance, outperforming multiple baseline methods. We further investigate four adaptive adversaries attempting to bypass UnGANable and show that some of them are only slightly effective.
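The cloaking idea described in the abstract resembles a bounded adversarial perturbation against the inversion pipeline. Below is a minimal sketch of one white-box variant, assuming the defender has a differentiable GAN encoder and searches an L-infinity ball around the target image; the function name, loss, and hyperparameters are illustrative assumptions, not the paper's exact design.

```python
import torch

def generate_cloak(x_target, encoder, eps=0.05, alpha=0.005, steps=100):
    """Search image space near x_target for a cloaked image whose
    latent code diverges from the clean one (illustrative sketch only).

    x_target : clean image tensor in [0, 1], shape (1, 3, H, W)
    encoder  : a differentiable GAN encoder mapping images to latents
    """
    z_clean = encoder(x_target).detach()       # latent code of the clean image
    x_cloak = x_target.clone()
    for _ in range(steps):
        x_cloak.requires_grad_(True)
        # Ascend on the latent-space distance to the clean code
        loss = torch.nn.functional.mse_loss(encoder(x_cloak), z_clean)
        grad, = torch.autograd.grad(loss, x_cloak)
        with torch.no_grad():
            x_cloak = x_cloak + alpha * grad.sign()                     # signed-gradient step
            x_cloak = x_target + (x_cloak - x_target).clamp(-eps, eps)  # stay in the L_inf ball
            x_cloak = x_cloak.clamp(0.0, 1.0)                           # keep a valid image
    return x_cloak.detach()
```

This covers only the simplest scenario in which the defender knows the encoder; the paper's other four scenarios, where the defender has less background knowledge, would require surrogate models or gradient-free search instead.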
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- ReliableSwap: Boosting General Face Swapping Via Reliable Supervision [9.725105108879717]
This paper proposes to construct reliable supervision, dubbed cycle triplets, which serves as the image-level guidance when the source identity differs from the target one during training.
Specifically, we use face reenactment and blending techniques to synthesize the swapped face from real images in advance.
Our face swapping framework, named ReliableSwap, can boost the performance of any existing face swapping network with negligible overhead.
arXiv Detail & Related papers (2023-06-08T17:01:14Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Initiative Defense against Facial Manipulation [82.96864888025797]
We propose a novel framework of initiative defense to degrade the performance of facial manipulation models controlled by malicious users.
We first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom.
arXiv Detail & Related papers (2021-12-19T09:42:28Z)
- GAN Inversion: A Survey [125.62848237531945]
GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model.
GAN inversion plays an essential role in enabling pretrained GAN models such as StyleGAN and BigGAN to be used for real image editing applications; a minimal sketch of the optimization-based procedure appears after this list.
arXiv Detail & Related papers (2021-01-14T14:11:00Z)
- SLGAN: Style- and Latent-guided Generative Adversarial Network for Desirable Makeup Transfer and Removal [44.290305928805836]
There are five features to consider when using generative adversarial networks to apply makeup to photos of the human face.
Several related works have been proposed, mainly using generative adversarial networks (GANs).
This paper closes the gap with an innovative style- and latent-guided GAN (SLGAN).
arXiv Detail & Related papers (2020-09-16T08:54:20Z)
- Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces [36.87244915810356]
Deepfake represents a category of face-swapping attacks that leverage machine learning models.
We propose to use novel transformation-aware adversarially perturbed faces as a defense against Deepfake attacks.
We also propose to use an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants.
arXiv Detail & Related papers (2020-06-12T18:51:57Z)
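For reference, the optimization-based GAN inversion that UnGANable disrupts (and that the survey above covers) can be sketched as follows. This is a minimal, assumption-laden illustration: `G` stands for any pretrained generator, and a plain pixel-space L2 loss replaces the richer perceptual objectives used in practice.

```python
import torch

def invert(x, G, latent_dim=512, steps=500, lr=0.01):
    """Optimization-based GAN inversion: find a latent z whose
    generated image G(z) reconstructs x (simplified illustration)."""
    z = torch.randn(1, latent_dim, requires_grad=True)  # random initialization
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)    # pixel-space reconstruction loss
        loss.backward()
        opt.step()
    return z.detach()
```

Hybrid inversion, the second technique the UnGANable abstract considers, typically initializes z from an encoder's prediction rather than random noise before running the same optimization.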