Initiative Defense against Facial Manipulation
- URL: http://arxiv.org/abs/2112.10098v1
- Date: Sun, 19 Dec 2021 09:42:28 GMT
- Title: Initiative Defense against Facial Manipulation
- Authors: Qidong Huang, Jie Zhang, Wenbo Zhou, Weiming Zhang, Nenghai Yu
- Abstract summary: We propose a novel framework of initiative defense to degrade the performance of facial manipulation models controlled by malicious users.
We first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom.
- Score: 82.96864888025797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benefiting from the development of generative adversarial networks (GAN),
facial manipulation has achieved significant progress in both academia and
industry recently. It has inspired an increasing number of entertainment
applications but meanwhile poses severe threats to individual privacy and even
political security. To mitigate such risks, many countermeasures have been
proposed. However, the great majority of these methods are designed in a
passive manner: they detect whether facial images or videos have been tampered
with after their wide propagation. Such detection-based methods have a fatal
limitation: they only work for ex-post forensics and cannot prevent malicious
behavior from occurring in the first place. To address this limitation, in this
paper, we propose a novel framework of initiative defense to degrade the
performance of facial manipulation models controlled by malicious users. The
basic idea is to actively inject imperceptible venom into target facial data
before manipulation. To this end, we first imitate the target manipulation
model with a surrogate model, and then devise a poison perturbation generator
to obtain the desired venom. An alternating training strategy is further
leveraged to train both the surrogate model and the perturbation generator.
Two typical facial manipulation tasks, face attribute editing and face
reenactment, are considered in our initiative defense framework. Extensive experiments
demonstrate the effectiveness and robustness of our framework in different
settings. Finally, we hope this work can shed some light on initiative
countermeasures against more adversarial scenarios.
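The abstract describes the method only at a high level. As a concrete illustration, below is a minimal, self-contained PyTorch sketch of the alternating training strategy it outlines: a surrogate network is fitted to imitate the target manipulation model, and a perturbation generator is trained against that surrogate to produce a bounded, imperceptible "venom". All module architectures, the stand-in target model, the loss choices, and the hyperparameters (Surrogate, PerturbGen, eps) are illustrative assumptions, not the authors' released implementation.
```python
# Minimal sketch of the alternating training described in the abstract.
# Everything here (architectures, losses, eps, the toy target model) is
# an illustrative assumption, not the authors' code.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Stands in for the target manipulation model we want to imitate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class PerturbGen(nn.Module):
    """Produces an imperceptible 'venom' perturbation, bounded by eps."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        # Tanh output in [-1, 1] keeps the perturbation within +/- eps.
        return (x + self.eps * self.net(x)).clamp(0, 1)

def target_model(x):
    # Placeholder for the real (black-box) manipulation model; assumed
    # queryable here only to make the imitation step concrete.
    return x.flip(-1)

surrogate, gen = Surrogate(), PerturbGen()
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
mse = nn.MSELoss()

for step in range(100):
    x = torch.rand(4, 3, 64, 64)  # stand-in batch of face images in [0, 1]
    # (1) Surrogate step: fit the surrogate to the target model's outputs.
    opt_s.zero_grad()
    loss_s = mse(surrogate(x), target_model(x))
    loss_s.backward()
    opt_s.step()
    # (2) Generator step: craft venom so that manipulating the poisoned
    # input no longer yields the intended result (maximize surrogate error).
    opt_g.zero_grad()
    loss_g = -mse(surrogate(gen(x)), target_model(x))
    loss_g.backward()
    opt_g.step()
```
The design point worth noting is the adversarial coupling: the surrogate chases the target model while the generator works against the surrogate, so perturbations crafted on the surrogate are expected to transfer to the inaccessible manipulation model.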
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent such fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer [6.6251662169603005]
We propose a novel feature backdoor attack against face recognition via makeup transfer, dubbed MakeupAttack.
In our attack, we design an iterative training paradigm to learn the subtle features of the proposed makeup-style trigger.
The results demonstrate that our proposed attack method can bypass existing state-of-the-art defenses while maintaining effectiveness, robustness, naturalness, and stealthiness, without compromising model performance.
arXiv Detail & Related papers (2024-08-22T11:39:36Z)
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel, previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.