DeepForgeSeal: Latent Space-Driven Semi-Fragile Watermarking for Deepfake Detection Using Multi-Agent Adversarial Reinforcement Learning
- URL: http://arxiv.org/abs/2511.04949v1
- Date: Fri, 07 Nov 2025 03:24:50 GMT
- Title: DeepForgeSeal: Latent Space-Driven Semi-Fragile Watermarking for Deepfake Detection Using Multi-Agent Adversarial Reinforcement Learning
- Authors: Tharindu Fernando, Clinton Fookes, Sridha Sridharan
- Abstract summary: This paper introduces a novel deep learning framework that harnesses high-dimensional latent space representations and the Multi-Agent Adversarial Reinforcement Learning paradigm. We develop a learnable watermark embedder that operates in the latent space, capturing high-level image semantics. Comprehensive evaluations on the CelebA and CelebA-HQ benchmarks reveal that our method consistently outperforms state-of-the-art approaches.
- Score: 32.222640070452464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rapid advances in generative AI have led to increasingly realistic deepfakes, posing growing challenges for law enforcement and public trust. Existing passive deepfake detectors struggle to keep pace, largely due to their dependence on specific forgery artifacts, which limits their ability to generalize to new deepfake types. Proactive deepfake detection using watermarks has emerged to address the challenge of identifying high-quality synthetic media. However, these methods often struggle to balance robustness against benign distortions with sensitivity to malicious tampering. This paper introduces a novel deep learning framework that harnesses high-dimensional latent space representations and the Multi-Agent Adversarial Reinforcement Learning (MAARL) paradigm to develop a robust and adaptive watermarking approach. Specifically, we develop a learnable watermark embedder that operates in the latent space, capturing high-level image semantics, while offering precise control over message encoding and extraction. The MAARL paradigm empowers the learnable watermarking agent to pursue an optimal balance between robustness and fragility by interacting with a dynamic curriculum of benign and malicious image manipulations simulated by an adversarial attacker agent. Comprehensive evaluations on the CelebA and CelebA-HQ benchmarks reveal that our method consistently outperforms state-of-the-art approaches, achieving improvements of over 4.5% on CelebA and more than 5.3% on CelebA-HQ under challenging manipulation scenarios.
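As a rough illustration of the semi-fragile idea only (not the paper's architecture), a watermark can be embedded along fixed random carrier directions in a latent vector, standing in for the learned embedder; the names, dimensions, and `ALPHA` strength below are hypothetical. A small embedding step survives mild, benign perturbations of the latent but is destroyed by heavy tampering:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, MSG_BITS = 256, 32
ALPHA = 0.05  # embedding strength (hypothetical; the paper learns this trade-off)

# Orthonormal carrier directions: a fixed stand-in for the learned latent embedder.
carriers, _ = np.linalg.qr(rng.standard_normal((LATENT_DIM, MSG_BITS)))

def embed(latent, bits):
    """Step +/-ALPHA along each bit's carrier direction."""
    signs = 2.0 * np.asarray(bits) - 1.0  # {0,1} -> {-1,+1}
    return latent + carriers @ (ALPHA * signs)

def extract(wm_latent, host_latent):
    """Non-blind extraction: project the residual onto the carriers."""
    return (carriers.T @ (wm_latent - host_latent) > 0).astype(int)

host = rng.standard_normal(LATENT_DIM)
msg = rng.integers(0, 2, size=MSG_BITS)
wm = embed(host, msg)

benign = wm + 0.01 * rng.standard_normal(LATENT_DIM)    # mild, benign distortion
tampered = wm + 1.00 * rng.standard_normal(LATENT_DIM)  # heavy, malicious edit

acc_benign = (extract(benign, host) == msg).mean()
acc_tampered = (extract(tampered, host) == msg).mean()
```

With a small `ALPHA`, `acc_benign` stays near 1.0 while `acc_tampered` collapses toward chance; tuning that robust-versus-fragile balance is what the MAARL curriculum automates against an adaptive attacker agent.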
Related papers
- Character-Level Perturbations Disrupt LLM Watermarks [64.60090923837701]
We formalize the system model for Large Language Model (LLM) watermarking. We characterize two realistic threat models constrained by limited access to the watermark detector. We demonstrate that character-level perturbations are significantly more effective for watermark removal under the most restrictive threat model. Experiments confirm the superiority of character-level perturbations and the effectiveness of the Genetic Algorithm (GA) in removing watermarks under realistic constraints.
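As a toy illustration of this attack class (the homoglyph map and parameters below are made up; the paper's GA search is not reproduced here), a homoglyph substitution keeps text human-readable while changing the exact character sequence a watermark detector tokenizes:

```python
import random

# Hypothetical homoglyph map: Latin letters -> visually similar Cyrillic letters.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440", "c": "\u0441"}

def perturb_chars(text, rate=0.5, seed=0):
    """Swap a fraction of mapped characters for their homoglyphs."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in HOMOGLYPHS and rng.random() < rate:
            out.append(HOMOGLYPHS[ch])
        else:
            out.append(ch)
    return "".join(out)

original = "the watermarked model output reads completely normally"
attacked = perturb_chars(original)
```

The attacked string renders almost identically to a human reader, but its code points (and hence its tokenization) differ, which is why character-level edits can evade token-level watermark statistics.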
arXiv Detail & Related papers (2025-09-11T02:50:07Z)
- Semantic Watermarking Reinvented: Enhancing Robustness and Generation Quality with Fourier Integrity [31.666430190864947]
We propose a novel embedding method called Hermitian Symmetric Fourier Watermarking (SFW). SFW maintains frequency integrity by enforcing Hermitian symmetry. We introduce a center-aware embedding strategy that reduces the vulnerability of semantic watermarking to cropping attacks.
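A minimal sketch of the Hermitian-symmetry idea only (the coordinates, image size, and strength below are toy values, not the SFW method): every change to a DFT coefficient is mirrored at its conjugate-symmetric index, so the spectrum stays Hermitian and the watermarked image remains real-valued:

```python
import numpy as np

rng = np.random.default_rng(42)

H, W = 32, 32
# Hypothetical mid-frequency slots, chosen away from self-conjugate points like (0, 0).
COORDS = [(3, 5), (4, 9), (5, 7), (6, 11), (7, 4), (8, 10), (9, 6), (10, 8)]

def embed(img, bits, strength=50.0):
    """Add a real +/-strength offset per bit, mirrored at the conjugate index
    so the spectrum stays Hermitian and ifft2 returns a real image."""
    F = np.fft.fft2(img)
    for (u, v), b in zip(COORDS, bits):
        F[u, v] += strength * b
        F[(-u) % H, (-v) % W] += strength * b  # Hermitian mirror
    return np.fft.ifft2(F).real

def extract(wm_img, ref_img):
    """Non-blind readout: sign of the coefficient difference at each slot."""
    D = np.fft.fft2(wm_img) - np.fft.fft2(ref_img)
    return np.array([1 if D[u, v].real > 0 else -1 for (u, v) in COORDS])

img = rng.uniform(0.0, 255.0, size=(H, W))
bits = rng.integers(0, 2, size=len(COORDS)) * 2 - 1  # message as +/-1
wm = embed(img, bits)
recovered = extract(wm, img)
```

Because both halves of the spectrum are updated together, the inverse transform is real up to float round-off (no imaginary clean-up needed), and at this strength the spatial change stays under one gray level per pixel.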
arXiv Detail & Related papers (2025-09-09T12:15:16Z)
- Uncovering and Mitigating Destructive Multi-Embedding Attacks in Deepfake Proactive Forensics [17.112388802067425]
Proactive forensics involves embedding imperceptible watermarks to enable reliable source tracking. Existing methods rely on an idealized assumption of single watermark embedding, which proves impractical in real-world scenarios. We propose a general training paradigm named Adversarial Interference Simulation (AIS) to address this vulnerability. Our method enables the model to extract the original watermark correctly even after a second embedding.
arXiv Detail & Related papers (2025-08-24T07:57:32Z)
- ForensicsSAM: Toward Robust and Unified Image Forgery Detection and Localization Resisting to Adversarial Attack [56.0056378072843]
We show that highly transferable adversarial images can be crafted solely via the upstream model. We propose ForensicsSAM, a unified IFDL framework with built-in adversarial robustness.
arXiv Detail & Related papers (2025-08-10T16:03:44Z)
- RAID: A Dataset for Testing the Adversarial Robustness of AI-Generated Image Detectors [57.81012948133832]
We present RAID (Robust evaluation of AI-generated image Detectors), a dataset of 72k diverse and highly transferable adversarial examples. Our methodology generates adversarial images that transfer with a high success rate to unseen detectors. Our findings indicate that current state-of-the-art AI-generated image detectors can be easily deceived by adversarial examples.
arXiv Detail & Related papers (2025-06-04T14:16:00Z)
- A Knowledge-guided Adversarial Defense for Resisting Malicious Visual Manipulation [93.28532038721816]
Malicious applications of visual manipulation have raised serious threats to the security and reputation of users in many fields. We propose a knowledge-guided adversarial defense (KGAD) to actively force malicious manipulation models to output semantically confusing samples.
arXiv Detail & Related papers (2025-04-11T10:18:13Z)
- Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking [6.246098300155482]
We propose a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication.
Our proposed framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks.
arXiv Detail & Related papers (2024-10-02T18:05:03Z)
- Deepfake Sentry: Harnessing Ensemble Intelligence for Resilient Detection and Generalisation [0.8796261172196743]
We propose a proactive and sustainable deepfake training augmentation solution.
We employ a pool of autoencoders that mimic the effect of the artefacts introduced by the deepfake generator models.
Experiments reveal that our proposed ensemble autoencoder-based data augmentation approach improves generalisation.
arXiv Detail & Related papers (2024-03-29T19:09:08Z)
- Robust Identity Perceptual Watermark Against Deepfake Face Swapping [9.402982368385569]
With the rapid development of deep generative models, deepfake face swapping has raised critical privacy concerns. We propose a robust identity perceptual watermarking framework that concurrently performs detection and source tracing against deepfake face swapping.
arXiv Detail & Related papers (2023-11-02T16:04:32Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.