Robust Identity Perceptual Watermark Against Deepfake Face Swapping
- URL: http://arxiv.org/abs/2311.01357v2
- Date: Fri, 15 Mar 2024 14:27:00 GMT
- Title: Robust Identity Perceptual Watermark Against Deepfake Face Swapping
- Authors: Tianyi Wang, Mengxiao Huang, Harry Cheng, Bin Ma, Yinglong Wang
- Abstract summary: Deepfake face swapping has caused critical privacy issues with the rapid development of deep generative models.
We propose the first robust identity perceptual watermarking framework that concurrently performs detection and source tracing against Deepfake face swapping.
- Score: 8.276177968730549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While offering convenience and entertainment to society, Deepfake face swapping has caused critical privacy issues with the rapid development of deep generative models. Because high-quality synthetic images contain only imperceptible artifacts, passive detection models against face swapping in recent years usually suffer performance degradation due to limited generalizability. Therefore, several studies have attempted to proactively protect original images against malicious manipulation by inserting invisible signals in advance. However, the existing proactive defense approaches demonstrate unsatisfactory results with respect to visual quality, detection accuracy, and source tracing ability. In this study, to fill this research gap, we propose the first robust identity perceptual watermarking framework that concurrently and proactively performs detection and source tracing against Deepfake face swapping. We assign identity semantics regarding the image contents to the watermarks and devise an unpredictable and nonreversible chaotic encryption system to ensure watermark confidentiality. The watermarks are encoded and recovered by jointly training an encoder-decoder framework along with adversarial image manipulations. Falsification and source tracing are accomplished by checking the consistency between the content-matched identity perceptual watermark and the robust watermark recovered from the image. Extensive experiments demonstrate state-of-the-art detection performance on Deepfake face swapping under both cross-dataset and cross-manipulation settings.
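To make the protect-and-verify flow concrete, the following is a minimal illustrative sketch of the pipeline the abstract describes: an identity-derived watermark is encrypted with a chaotic (logistic-map) keystream, embedded into the image, and later recovered and compared against the watermark recomputed from the face identity found in the image. The learned encoder-decoder is replaced here by a trivial LSB stand-in, and identity_to_bits is a hypothetical mapping; this is a sketch under those assumptions, not the authors' exact construction.

```python
import hashlib

import numpy as np


def identity_to_bits(face_embedding: np.ndarray, n_bits: int = 128) -> np.ndarray:
    """Hypothetical mapping from a face-identity embedding to a binary watermark."""
    seed = int.from_bytes(hashlib.sha256(face_embedding.tobytes()).digest()[:4], "big")
    return np.random.default_rng(seed).integers(0, 2, size=n_bits, dtype=np.uint8)


def logistic_keystream(key: float, n_bits: int) -> np.ndarray:
    """Chaotic keystream from the logistic map x <- 4x(1-x), keyed by x0 in (0, 1)."""
    x, bits = key, np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = 4.0 * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits


def encrypt(watermark: np.ndarray, key: float) -> np.ndarray:
    """XOR the watermark with the chaotic keystream (encryption equals decryption)."""
    return watermark ^ logistic_keystream(key, watermark.size)


def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Stand-in for the learned encoder: hide bits in the LSBs of the first pixels."""
    out = image.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out.reshape(image.shape)


def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Stand-in for the learned robust decoder."""
    return (image.ravel()[:n_bits] & 1).astype(np.uint8)


def verify(image: np.ndarray, face_embedding: np.ndarray, key: float,
           threshold: float = 0.9) -> bool:
    """Falsification check: the recovered watermark must match the watermark
    recomputed from the image's current identity."""
    expected = encrypt(identity_to_bits(face_embedding), key)
    recovered = extract(image, expected.size)
    return float(np.mean(expected == recovered)) >= threshold


# Protect an image, then verify it with the embedding of the (unchanged) face.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
embedding = np.random.randn(512).astype(np.float32)
protected = embed(image, encrypt(identity_to_bits(embedding), key=0.37))
print(verify(protected, embedding, key=0.37))  # True: identity and watermark agree
```

After a face swap the identity embedding changes, so the content-matched watermark no longer agrees with the bits still recoverable from the image and verify() returns False, which is the consistency test the abstract refers to; the chaotic keystream keeps the embedded bits unpredictable to anyone without the key.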
Related papers
- Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking [6.246098300155482]
We propose a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication.
Our proposed framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks.
arXiv Detail & Related papers (2024-10-02T18:05:03Z)
- RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
arXiv Detail & Related papers (2024-01-23T22:00:49Z)
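A false-positive-rate guarantee of the kind claimed in the entry above can in general be obtained by calibrating the detection threshold on scores from images known to be unwatermarked. The sketch below shows that standard quantile-style calibration; it illustrates the idea only and is not RAW's actual construction.

```python
import numpy as np


def calibrate_threshold(null_scores: np.ndarray, alpha: float = 0.01) -> float:
    """Choose a threshold so that P(score > threshold | unwatermarked) <= alpha,
    using a conservative order statistic of the calibration scores."""
    n = null_scores.size
    k = int(np.ceil((n + 1) * (1.0 - alpha)))  # conformal-style rank
    return float(np.sort(null_scores)[min(k, n) - 1])


rng = np.random.default_rng(0)
null_scores = rng.normal(0.0, 1.0, 5000)  # detector scores on images known to be clean
wm_scores = rng.normal(3.0, 1.0, 5000)    # detector scores on watermarked images
tau = calibrate_threshold(null_scores, alpha=0.01)

print("threshold:", round(tau, 3))
print("empirical FPR:", np.mean(null_scores > tau))  # bounded by alpha, up to sampling noise
print("empirical TPR:", np.mean(wm_scores > tau))
```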
- Dual Defense: Adversarial, Traceable, and Invisible Robust Watermarking against Face Swapping [13.659927216999407]
Malicious applications of deep forgery, represented by face swapping, have introduced security threats such as misinformation dissemination and identity fraud.
We propose a novel active defense mechanism that combines traceability and adversariality, called Dual Defense.
It invisibly embeds a single robust watermark within the target face to actively respond to sudden cases of malicious face swapping.
arXiv Detail & Related papers (2023-10-25T10:39:51Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks [47.04650443491879]
We analyze the robustness of various AI-image detectors including watermarking and deepfake detectors.
We show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones.
arXiv Detail & Related papers (2023-09-29T18:30:29Z)
- T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text to image and watermark (T2IW)
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic feature and the watermark signal to be compatible in pixels.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
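A rough sketch of backdoor-style watermark injection with random parameter perturbation, in the spirit of the summary above; the placeholder model, trigger set, noise scale, and schedule are assumptions for illustration, not the paper's recipe.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier standing in for the model to be watermarked.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical trigger set: random patterns (standing in for crops of one OoD image)
# mapped to a secret label; high accuracy on them later acts as the ownership watermark.
trigger_x = torch.rand(32, 1, 28, 28)
trigger_y = torch.full((32,), 7)

for step in range(200):
    # Randomly perturb the parameters before each update so the injected watermark
    # also holds for nearby weights, i.e. survives mild fine-tuning or pruning attacks.
    noise = [0.01 * torch.randn_like(p) for p in model.parameters()]
    with torch.no_grad():
        for p, n in zip(model.parameters(), noise):
            p.add_(n)

    loss = loss_fn(model(trigger_x), trigger_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():  # undo the temporary perturbation
        for p, n in zip(model.parameters(), noise):
            p.sub_(n)

# Ownership verification: the owner checks accuracy on the secret trigger set.
accuracy = (model(trigger_x).argmax(dim=1) == trigger_y).float().mean().item()
print("trigger-set accuracy (watermark present if close to 1):", accuracy)
```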
- Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework, Integrity Encryptor, that aims to protect portraits through a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z)
- SepMark: Deep Separable Watermarking for Unified Source Tracing and Deepfake Detection [15.54035395750232]
Malicious Deepfakes have led to a sharp conflict over distinguishing between genuine and forged faces.
We propose SepMark, which provides a unified framework for source tracing and Deepfake detection.
arXiv Detail & Related papers (2023-05-10T17:15:09Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
Embedding the reversible watermark affects classification performance by less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
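For intuition, the sketch below shows one generic way a reversible model watermark can work: a signature is written into the lowest mantissa bits of the weights while the displaced bits are stored for exact restoration, so integrity can be checked and the original model recovered bit for bit. This LSB scheme is only an illustrative stand-in, not the algorithm of the paper above.

```python
import hashlib

import numpy as np


def embed_reversible(weights: np.ndarray, bits: np.ndarray):
    """Write bits into the lowest mantissa bit of float32 weights and return the
    watermarked weights plus the displaced bits needed for exact restoration."""
    raw = weights.astype(np.float32).view(np.uint32).copy()
    displaced = (raw[: bits.size] & 1).astype(np.uint8)
    raw[: bits.size] = (raw[: bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32), displaced


def extract(wm_weights: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the embedded bits back out of the watermarked weights."""
    return (wm_weights.view(np.uint32)[:n_bits] & 1).astype(np.uint8)


def restore(wm_weights: np.ndarray, displaced: np.ndarray) -> np.ndarray:
    """Undo the embedding exactly, recovering the original weights bit for bit."""
    raw = wm_weights.view(np.uint32).copy()
    raw[: displaced.size] = (raw[: displaced.size] & ~np.uint32(1)) | displaced
    return raw.view(np.float32)


weights = np.random.randn(1024).astype(np.float32)
signature = np.unpackbits(np.frombuffer(hashlib.sha256(b"owner-secret").digest(), dtype=np.uint8))
wm_weights, displaced = embed_reversible(weights, signature)

assert np.array_equal(extract(wm_weights, signature.size), signature)  # integrity check passes
assert np.array_equal(restore(wm_weights, displaced), weights)         # exact recovery
print("signature verified; original weights restored exactly")
```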
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content and is not responsible for any consequences arising from its use.