Defending against GAN-based Deepfake Attacks via Transformation-aware
Adversarial Faces
- URL: http://arxiv.org/abs/2006.07421v1
- Date: Fri, 12 Jun 2020 18:51:57 GMT
- Title: Defending against GAN-based Deepfake Attacks via Transformation-aware
Adversarial Faces
- Authors: Chaofei Yang, Lei Ding, Yiran Chen, Hai Li
- Abstract summary: Deepfake represents a category of face-swapping attacks that leverage machine learning models.
We propose to use novel transformation-aware adversarially perturbed faces as a defense against Deepfake attacks.
We also propose to use an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants.
- Score: 36.87244915810356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfake represents a category of face-swapping attacks that leverage machine
learning models such as autoencoders or generative adversarial networks.
Although the concept of face swapping is not new, its recent technical
advances make fake content (e.g., images, videos) more realistic and
imperceptible to humans. Various detection techniques for Deepfake attacks have
been explored. These methods, however, are passive measures against Deepfakes
as they are mitigation strategies after the high-quality fake content is
generated. More importantly, we would like to think ahead of the attackers with
robust defenses. This work aims to take an offensive measure to impede the
generation of high-quality fake images or videos. Specifically, we propose to
use novel transformation-aware adversarially perturbed faces as a defense
against GAN-based Deepfake attacks. Different from the naive adversarial faces,
our proposed approach leverages differentiable random image transformations
during the generation. We also propose to use an ensemble-based approach to
enhance the defense robustness against GAN-based Deepfake variants under the
black-box setting. We show that training a Deepfake model with adversarial
faces can lead to a significant degradation in the quality of synthesized
faces. This degradation is twofold. On the one hand, the quality of the
synthesized faces is reduced with more visual artifacts such that the
synthesized faces are more obviously fake or less convincing to human
observers. On the other hand, the synthesized faces can easily be detected
based on various metrics.
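The transformation-aware idea described above resembles expectation-over-transformation adversarial optimization: the perturbation is computed so that it survives the random, differentiable image transformations applied during Deepfake training. The sketch below illustrates this with a toy setup; the linear "model", the brightness-scaling transform, and names such as `eot_perturb` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Deepfake encoder: a fixed linear map.
# The defense ascends the loss so that training on perturbed
# faces degrades the synthesized output (assumption: the real
# method uses the GAN's training loss instead).
W = rng.standard_normal((8, 16)) * 0.1

def loss_and_grad(x):
    # loss = 0.5 * ||W x||^2; gradient w.r.t. x is W^T W x
    z = W @ x
    return 0.5 * float(z @ z), W.T @ z

def random_transform(x):
    # Differentiable random transformation: brightness scaling.
    # Its Jacobian w.r.t. x is just the scale factor s.
    s = rng.uniform(0.8, 1.2)
    return s * x, s

def eot_perturb(x, eps=0.1, steps=10, samples=8):
    """Transformation-aware FGSM-style perturbation: average the
    gradient over sampled transformations before each sign step."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(samples):
            xt, s = random_transform(x + delta)
            _, grad_t = loss_and_grad(xt)
            g += s * grad_t  # chain rule through the transform
        delta = np.clip(delta + (eps / steps) * np.sign(g / samples),
                        -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)

x = rng.uniform(0.2, 0.8, size=16)   # toy "face" in pixel range
x_adv = eot_perturb(x)
```

Averaging gradients over transformation samples is also where an ensemble of surrogate models would plug in for the black-box setting: instead of (or in addition to) sampling transforms, one would average gradients across several Deepfake variants.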
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Evading Forensic Classifiers with Attribute-Conditioned Adversarial
Faces [6.105361899083232]
We show that it is possible to successfully generate adversarial fake faces with a specified set of attributes.
We propose a framework to search for adversarial latent codes within the feature space of StyleGAN.
We also propose a meta-learning based optimization strategy to achieve transferable performance on unknown target models.
arXiv Detail & Related papers (2023-06-22T17:59:55Z) - UnGANable: Defending Against GAN-based Face Manipulation [69.90981797810348]
Deepfakes pose severe threats of visual misinformation to our society.
One representative deepfake application is face manipulation that modifies a victim's facial attributes in an image.
We propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation.
arXiv Detail & Related papers (2022-10-03T14:20:01Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction; adversarial examples crafted on the substitute transfer directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for
Combating Deepfakes [74.18502861399591]
Malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) has posed a huge threat to our society.
We propose a universal adversarial attack method on deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
arXiv Detail & Related papers (2021-05-23T07:28:36Z) - Exploring Adversarial Fake Images on Face Manifold [5.26916168336451]
Images synthesized by powerful generative adversarial network (GAN) based methods have drawn moral and privacy concerns.
In this paper, instead of adding adversarial noise, we optimally search adversarial points on face manifold to generate anti-forensic fake face images.
arXiv Detail & Related papers (2021-01-09T02:08:59Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives
Training [0.0]
We introduce a class of adversarial attacks that can disrupt face-swapping autoencoders.
We propose the Oscillating GAN (OGAN) attack, a novel attack optimized to be training-resistant.
These results demonstrate the existence of training-resistant adversarial attacks, potentially applicable to a wide range of domains.
arXiv Detail & Related papers (2020-06-17T17:18:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.