Dual Defense: Adversarial, Traceable, and Invisible Robust Watermarking
against Face Swapping
- URL: http://arxiv.org/abs/2310.16540v1
- Date: Wed, 25 Oct 2023 10:39:51 GMT
- Title: Dual Defense: Adversarial, Traceable, and Invisible Robust Watermarking
against Face Swapping
- Authors: Yunming Zhang and Dengpan Ye and Caiyun Xie and Long Tang and Chuanxi
Chen and Ziyi Liu and Jiacheng Deng
- Abstract summary: Malicious applications of deep forgery, represented by face swapping, have introduced security threats such as misinformation dissemination and identity fraud.
We propose a novel active defense mechanism that combines traceability and adversariality, called Dual Defense.
It invisibly embeds a single robust watermark within the target face to proactively counter malicious face swapping.
- Score: 13.659927216999407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The malicious applications of deep forgery, represented by face swapping,
have introduced security threats such as misinformation dissemination and
identity fraud. While some research has proposed the use of robust watermarking
methods to trace the copyright of facial images for post-event traceability,
these methods cannot effectively prevent the generation of forgeries at the
source and curb their dissemination. To address this problem, we propose a
novel comprehensive active defense mechanism that combines traceability and
adversariality, called Dual Defense. Dual Defense invisibly embeds a single
robust watermark within the target face to proactively counter malicious face
swapping. It disrupts the output of the face swapping model
while maintaining the integrity of watermark information throughout the entire
dissemination process. This allows for watermark extraction at any stage of
image tracking for traceability. Specifically, we introduce a watermark
embedding network based on an original-domain feature impersonation attack. This
network learns robust adversarial features of target facial images and embeds
watermarks, seeking a well-balanced trade-off between watermark invisibility,
adversariality, and traceability through perceptual adversarial encoding
strategies. Extensive experiments demonstrate that Dual Defense achieves the
best overall defense success rate, generalizes well across anti-face-swapping
tasks and datasets, and maintains strong adversariality and traceability in
both original and robust settings, surpassing forgery defense methods that
offer only one of these capabilities, such as CMUA-Watermark, Anti-Forgery,
FakeTagger, and PGD.
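The paper itself includes no code here, but the trade-off described in the abstract lends itself to a multi-term training objective. Below is a minimal, hypothetical PyTorch-style sketch of how invisibility, adversariality, and traceability might be balanced in one loss; `encoder`, `decoder`, `swap_model`, and the loss weights are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Dual Defense-style objective (not the authors'
# code): one watermark must stay invisible, survive extraction, and
# adversarially disrupt a face-swapping model.
import torch
import torch.nn.functional as F

def dual_defense_loss(encoder, decoder, swap_model, face, watermark,
                      w_inv=1.0, w_adv=0.5, w_trace=1.0):
    """face: (B,3,H,W) target image; watermark: (B,L) bit message in {0,1}."""
    wm_face = encoder(face, watermark)              # watermarked face

    # 1) Invisibility: the watermarked face should look like the original.
    loss_inv = F.mse_loss(wm_face, face)

    # 2) Adversariality: push the swap model's output away from what it
    #    produces on the clean face, i.e. disrupt the forgery.
    with torch.no_grad():
        clean_swap = swap_model(face)
    loss_adv = -F.mse_loss(swap_model(wm_face), clean_swap)

    # 3) Traceability: the watermark must remain recoverable from the
    #    (possibly forged and redistributed) image.
    recovered = decoder(wm_face)
    loss_trace = F.binary_cross_entropy_with_logits(recovered, watermark)

    return w_inv * loss_inv + w_adv * loss_adv + w_trace * loss_trace
```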
Related papers
- LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks [7.965986856780787]
This paper introduces a novel training-free landmark perceptual watermark, LampMark for short.
We first analyze the structure-sensitive characteristics of Deepfake manipulations and devise a secure and confidential transformation pipeline.
We present an end-to-end watermarking framework that imperceptibly embeds and extracts watermarks for the images to be protected.
arXiv Detail & Related papers (2024-11-26T08:24:56Z)
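As a rough illustration of a training-free, structure-sensitive watermark in the spirit of LampMark (the function below is hypothetical, not the paper's pipeline), one can binarize normalized pairwise landmark distances into a bit string that changes whenever the face geometry is manipulated.

```python
# Hypothetical landmark-derived watermark: distances between facial
# landmarks encode face structure, so a face swap scrambles the bits.
import numpy as np

def landmark_watermark(landmarks: np.ndarray, n_bits: int = 128) -> np.ndarray:
    """landmarks: (K, 2) array of facial landmark coordinates."""
    # Normalize away translation and scale so bits reflect structure only.
    pts = landmarks - landmarks.mean(axis=0)
    pts /= np.linalg.norm(pts) + 1e-8

    # Pairwise distances capture the face's geometry.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    feats = d[np.triu_indices(len(pts), k=1)][:n_bits]

    # Threshold at the median to obtain a binary watermark.
    return (feats > np.median(feats)).astype(np.uint8)
```

Because a swap alters the geometry, bits recomputed from a manipulated face would disagree with the embedded ones, flagging the forgery.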
- Facial Features Matter: a Dynamic Watermark based Proactive Deepfake Detection Approach [11.51480331713537]
This paper proposes a Facial Feature-based Proactive deepfake detection method (FaceProtect).
We introduce a GAN-based One-way Dynamic Watermark Generating Mechanism (GODWGM) that uses 128-dimensional facial feature vectors as inputs.
We also propose a Watermark-based Verification Strategy (WVS) that combines steganography with GODWGM, allowing simultaneous transmission of the benchmark watermark.
arXiv Detail & Related papers (2024-11-22T08:49:08Z)
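A minimal sketch of a one-way dynamic watermark generator in the spirit of GODWGM is given below; the class, layer sizes, and bit length are assumptions, not the paper's architecture.

```python
# Hypothetical one-way dynamic watermark generator: a fixed network maps a
# 128-dimensional facial feature vector to a watermark, so the watermark
# changes whenever the underlying face does.
import torch
import torch.nn as nn

class DynamicWatermarkGenerator(nn.Module):
    def __init__(self, feat_dim: int = 128, wm_bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, wm_bits),
        )

    def forward(self, face_features: torch.Tensor) -> torch.Tensor:
        # Binarization makes the feature vector hard to invert from the bits.
        return (torch.sigmoid(self.net(face_features)) > 0.5).float()
```

At verification time, the watermark regenerated from the observed face can be compared with the benchmark watermark recovered via steganography; a mismatch indicates manipulation.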
- Adversarial Watermarking for Face Recognition [17.11307036255593]
In face recognition systems, watermarking plays a pivotal role in ensuring data integrity and security.
We explore the interaction between watermarking and adversarial attacks on face recognition models.
Our proposed adversarial watermarking attack reduces face matching accuracy by 67.2%.
arXiv Detail & Related papers (2024-09-24T12:58:32Z)
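The paper's exact attack is not reproduced here; the following hypothetical PGD-style sketch (`fr_model`, `embed_wm`, and all hyperparameters are assumptions) shows how a bounded perturbation interacting with a watermark embedder could degrade face matching.

```python
# Hypothetical PGD-style adversarial perturbation that degrades face
# matching once a watermark is embedded (not the paper's exact attack).
import torch

def adversarial_perturbation(fr_model, embed_wm, image, enrolled_emb,
                             eps=8 / 255, alpha=2 / 255, steps=10):
    """fr_model: face recognition net returning embeddings; embed_wm: an
    assumed differentiable watermark-embedding function."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = fr_model(embed_wm(image + delta))
        # Minimizing cosine similarity pushes the watermarked face away
        # from its enrolled identity, so matching fails.
        sim = torch.nn.functional.cosine_similarity(emb, enrolled_emb).mean()
        sim.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)          # keep perturbation bounded
        delta.grad.zero_()
    return (image + delta).detach().clamp(0, 1)
```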
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent such fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- DIP-Watermark: A Double Identity Protection Method Based on Robust Adversarial Watermark [13.007649270429493]
Face Recognition (FR) systems pose privacy risks.
One countermeasure is the adversarial attack, which deceives unauthorized malicious FR systems.
We propose the first double identity protection scheme based on traceable adversarial watermarking.
arXiv Detail & Related papers (2024-04-23T02:50:38Z)
- Robust Identity Perceptual Watermark Against Deepfake Face Swapping [8.276177968730549]
With the rapid development of deep generative models, Deepfake face swapping has caused critical privacy issues.
We propose the first robust identity perceptual watermarking framework that concurrently performs detection and source tracing against Deepfake face swapping.
arXiv Detail & Related papers (2023-11-02T16:04:32Z)
- T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text-to-image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic features and the watermark signal to be compatible at the pixel level.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
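As a hedged illustration of the idea above (not the authors' code), the sketch below injects a backdoor watermark while randomly perturbing model parameters at each step, so the watermark survives the nearby weight settings that removal attacks such as fine-tuning or pruning explore; `trigger_loader` and `noise_std` are hypothetical.

```python
# Hypothetical watermark injection under random parameter perturbation:
# training on a trigger set while jittering weights encourages a watermark
# that is robust to post-hoc weight changes.
import torch

def inject_watermark(model, trigger_loader, optimizer,
                     noise_std=1e-3, epochs=5):
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for triggers, target_labels in trigger_loader:
            # Jitter the weights before each step so the backdoor cannot
            # rely on one fragile parameter configuration.
            with torch.no_grad():
                for p in model.parameters():
                    p.add_(noise_std * torch.randn_like(p))
            optimizer.zero_grad()
            loss = loss_fn(model(triggers), target_labels)
            loss.backward()
            optimizer.step()
    return model
```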
- Invisible Image Watermarks Are Provably Removable Using Generative AI [47.25747266531665]
Invisible watermarks safeguard images' copyrights by embedding hidden messages detectable only by their owners.
We propose a family of regeneration attacks to remove these invisible watermarks.
The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image.
arXiv Detail & Related papers (2023-06-02T23:29:28Z)
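A minimal sketch of such a regeneration attack follows, assuming some pretrained image-restoration model `reconstructor` (e.g., a denoising autoencoder or diffusion denoiser); this is an illustration, not the paper's implementation.

```python
# Hypothetical regeneration attack: corrupt the image enough to wash out
# the watermark, then reconstruct the perceptual content generatively.
import torch

def regeneration_attack(image, reconstructor, noise_std=0.1):
    """image: (B,3,H,W) in [0,1]; reconstructor: an assumed pretrained
    image-restoration model."""
    noisy = (image + noise_std * torch.randn_like(image)).clamp(0, 1)
    with torch.no_grad():
        # Reconstruction restores what the image looks like but not the
        # fragile watermark signal riding on imperceptible pixel details.
        return reconstructor(noisy)
```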
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can easily be "stolen" by surrogate model attacks.
We propose a new watermarking methodology, termed "structure consistency", on which a new deep structure-aligned model watermarking algorithm is built.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes [74.18502861399591]
The malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) poses a serious threat to our society.
We propose a universal adversarial attack method on deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
arXiv Detail & Related papers (2021-05-23T07:28:36Z)
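To illustrate the cross-model universal idea (this sketch is hypothetical, not the CMUA-Watermark code), a single bounded perturbation can be optimized against several deepfake models at once:

```python
# Hypothetical training loop for a cross-model universal adversarial
# watermark: one perturbation, shared by all faces, distorts the output
# of every deepfake model in the ensemble.
import torch

def train_universal_watermark(deepfake_models, image_loader, image_shape,
                              eps=8 / 255, alpha=1 / 255, epochs=3):
    # One perturbation shared across all faces and all models.
    delta = torch.zeros((1, *image_shape), requires_grad=True)
    for _ in range(epochs):
        for faces in image_loader:   # loader yields (B,3,H,W) face batches
            loss = torch.zeros(())
            for model in deepfake_models:
                with torch.no_grad():
                    clean_out = model(faces)
                # Push each model's output on watermarked faces away from
                # its output on clean faces.
                loss = loss - torch.nn.functional.mse_loss(
                    model((faces + delta).clamp(0, 1)), clean_out)
            loss.backward()
            with torch.no_grad():
                delta -= alpha * delta.grad.sign()
                delta.clamp_(-eps, eps)   # imperceptibility budget
            delta.grad.zero_()
    return delta.detach()
```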