Restricted Black-box Adversarial Attack Against DeepFake Face Swapping
- URL: http://arxiv.org/abs/2204.12347v1
- Date: Tue, 26 Apr 2022 14:36:06 GMT
- Title: Restricted Black-box Adversarial Attack Against DeepFake Face Swapping
- Authors: Junhao Dong, Yuan Wang, Jianhuang Lai, Xiaohua Xie
- Abstract summary: We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model for face reconstruction and transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
- Score: 70.82017781235535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: DeepFake face swapping presents a significant threat to online security and
social media, which can replace the source face in an arbitrary photo/video
with the target face of an entirely different person. In order to prevent this
fraud, some researchers have begun to study the adversarial methods against
DeepFake or face manipulation. However, existing works focus on the white-box
setting or the black-box setting driven by abundant queries, which severely
limits the practical application of these methods. To tackle this problem, we
introduce a practical adversarial attack that does not require any queries to
the facial image forgery model. Our method is built on a substitute model for
face reconstruction and then transfers adversarial examples from the substitute
model directly to inaccessible black-box DeepFake models.
Specifically, we propose the Transferable Cycle Adversary Generative Adversarial
Network (TCA-GAN) to construct the adversarial perturbation for disrupting
unknown DeepFake systems. We also present a novel post-regularization module
for enhancing the transferability of generated adversarial examples. To
comprehensively measure the effectiveness of our approaches, we construct a
challenging benchmark of DeepFake adversarial attacks for future development.
Extensive experiments show that the proposed adversarial attack method sharply
degrades the visual quality of DeepFake face images, making them easier for
both humans and algorithms to detect. Moreover, we demonstrate
that the proposed algorithm can be generalized to offer face image protection
against various face translation methods.
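For intuition, the substitute-and-transfer idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the paper trains a TCA-GAN and applies a post-regularization module, whereas the sketch below uses a plain PGD-style loop against a hypothetical face-reconstruction substitute; the substitute model, the perturbation budget `eps`, and the step size `alpha` are all assumptions made for illustration.
```python
# Hedged sketch: maximize a face-reconstruction substitute's error under an
# L-inf budget, then hand the protected image to unseen DeepFake models in the
# hope that the perturbation transfers. The substitute autoencoder is assumed.
import torch
import torch.nn.functional as F

def craft_transferable_perturbation(substitute, face, eps=8/255, alpha=2/255, steps=40):
    """substitute: face-reconstruction model standing in for the black box.
    face: input image tensor in [0, 1], shape (1, 3, H, W)."""
    adv = face.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        recon = substitute(adv)                      # reconstruct the perturbed face
        loss = F.mse_loss(recon, face)               # disrupt reconstruction: ascend this
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()     # gradient-sign ascent step
        adv = face + (adv - face).clamp(-eps, eps)   # project back into the eps-ball
        adv = adv.clamp(0.0, 1.0)                    # keep a valid image
    return adv.detach()
```
The returned image is then fed to inaccessible face-swapping models; the expectation, per the abstract, is that the perturbation transfers and degrades their output without any queries to those models.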
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- FakeTracer: Catching Face-swap DeepFakes via Implanting Traces in Training [36.158715089667034]
Face-swap DeepFake is an emerging AI-based face forgery technique.
Due to the high privacy of faces, the misuse of this technique can raise severe social concerns.
We describe a new proactive defense method called FakeTracer to expose face-swap DeepFakes via implanting traces in training.
arXiv Detail & Related papers (2023-07-27T02:36:13Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Initiative Defense against Facial Manipulation [82.96864888025797]
We propose a novel framework of initiative defense to degrade the performance of facial manipulation models controlled by malicious users.
We first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom.
arXiv Detail & Related papers (2021-12-19T09:42:28Z)
- CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes [74.18502861399591]
Malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) poses a huge threat to our society.
We propose a universal adversarial attack method on deepfake models that generates a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models; a sketch of this cross-model optimization appears after this list.
arXiv Detail & Related papers (2021-05-23T07:28:36Z)
- Vulnerability of Face Recognition Systems Against Composite Face Reconstruction Attack [3.3707422585608953]
Rounding the confidence score is considered a trivial yet simple and effective countermeasure against gradient-descent-based image reconstruction attacks.
In this paper, we show that face reconstruction attacks based on composite faces reveal the inefficiency of the rounding policy as a countermeasure.
arXiv Detail & Related papers (2020-08-23T03:37:51Z)
- Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces [36.87244915810356]
Deepfake represents a category of face-swapping attacks that leverage machine learning models.
We propose to use novel transformation-aware adversarially perturbed faces as a defense against Deepfake attacks.
We also propose to use an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants.
arXiv Detail & Related papers (2020-06-12T18:51:57Z)
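As referenced in the CMUA-Watermark entry above, here is a minimal sketch of the cross-model universal watermark idea: one bounded perturbation optimized jointly against several surrogate manipulation models. It is an assumption-laden illustration rather than that paper's method; `surrogate_models`, `face_loader`, the fixed 256x256 resolution, and the plain sign-gradient update are hypothetical stand-ins for the paper's own attack pipeline and perturbation fusion strategy.
```python
# Hedged sketch: learn a single L-inf-bounded watermark that simultaneously
# disrupts several deepfake/face-manipulation models. All names are assumed.
import torch
import torch.nn.functional as F

def train_universal_watermark(surrogate_models, face_loader, eps=8/255, alpha=1/255, epochs=5):
    """Optimize one shared perturbation across models and face images."""
    delta = torch.zeros(1, 3, 256, 256)              # single shared watermark (assumed size)
    for _ in range(epochs):
        for faces in face_loader:                    # batches of clean faces in [0, 1]
            delta.requires_grad_(True)
            loss = 0.0
            for model in surrogate_models:           # accumulate disruption across models
                clean_out = model(faces).detach()
                adv_out = model((faces + delta).clamp(0.0, 1.0))
                loss = loss + F.mse_loss(adv_out, clean_out)
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
    return delta.detach()
```
Because a single perturbation must work for many images and models at once, it trades per-image optimality for broad applicability, which is the design trade-off a universal watermark accepts.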