Big Brother is Watching: Proactive Deepfake Detection via Learnable Hidden Face
- URL: http://arxiv.org/abs/2504.11309v1
- Date: Tue, 15 Apr 2025 15:50:54 GMT
- Title: Big Brother is Watching: Proactive Deepfake Detection via Learnable Hidden Face
- Authors: Hongbo Li, Shangchao Yang, Ruiyang Xia, Lin Yuan, Xinbo Gao
- Abstract summary: A secret template image is embedded imperceptibly into a host image, acting as an indicator that monitors for any malicious image forgery. A robust detector is built to accurately distinguish whether the steganographic image has been maliciously tampered with or benignly processed.
- Score: 40.40196403891759
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deepfake technologies continue to advance, passive detection methods struggle to generalize across diverse forgery manipulations and datasets. Proactive defense techniques have been actively studied, primarily aiming to prevent deepfake operations from working effectively. In this paper, we aim to bridge the gap between passive detection and proactive defense, and seek to solve the detection problem with a proactive methodology. Inspired by several watermarking-based forensic methods, we explore a novel detection framework based on the concept of ``hiding a learnable face within a face''. Specifically, relying on a semi-fragile invertible steganography network, a secret template image is embedded imperceptibly into a host image, acting as an indicator that reveals any malicious image forgery once restored by the inverse steganography process. Instead of being manually specified, the secret template is optimized during training to resemble a neutral facial appearance, just like a ``big brother'' hidden in the image to be protected. By incorporating a self-blending mechanism and a robustness learning strategy with a simulative transmission channel, a robust detector is built to accurately distinguish whether the steganographic image has been maliciously tampered with or benignly processed. Finally, extensive experiments on multiple datasets demonstrate the superiority of the proposed approach over competing passive and proactive detection methods.
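To make the ``hiding a learnable face within a face'' idea concrete, the sketch below caricatures the pipeline in plain PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: `StegoNet`, the convolutional embed/restore pair, `benign_channel`, `malicious_edit`, and all hyperparameters are hypothetical stand-ins for the semi-fragile invertible steganography network, simulative transmission channel, and self-blending forgeries described in the abstract.

```python
# Minimal sketch of "hiding a learnable face within a face" (hypothetical, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StegoNet(nn.Module):
    """Toy stand-in for the semi-fragile invertible steganography network."""
    def __init__(self, ch=16):
        super().__init__()
        # embed: host (3ch) + secret template (3ch) -> low-amplitude residual
        self.embed = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))
        # restore: steganographic image -> recovered template
        self.restore = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def hide(self, host, template):
        residual = self.embed(torch.cat([host, template], dim=1))
        return host + 0.01 * residual  # keep the stego image visually close to the host

    def reveal(self, stego):
        return self.restore(stego)

# The "big brother": a single learnable template optimized jointly with the network.
template = nn.Parameter(torch.rand(1, 3, 64, 64))
net = StegoNet()
opt = torch.optim.Adam(list(net.parameters()) + [template], lr=1e-3)

def benign_channel(x):
    # Simulated transmission channel: mild resampling and noise the template must survive.
    x = F.interpolate(F.avg_pool2d(x, 2), scale_factor=2, mode="bilinear", align_corners=False)
    return x + 0.01 * torch.randn_like(x)

def malicious_edit(x):
    # Crude stand-in for a deepfake manipulation: overwrite the central face region.
    x = x.clone()
    x[..., 16:48, 16:48] = torch.rand_like(x[..., 16:48, 16:48])
    return x

for step in range(5):  # a few toy iterations on random "host faces"
    host = torch.rand(4, 3, 64, 64)
    t = template.expand(host.size(0), -1, -1, -1)
    stego = net.hide(host, t)

    rec_benign = net.reveal(benign_channel(stego))   # should still resemble the template
    rec_forged = net.reveal(malicious_edit(stego))   # should no longer resemble it

    loss = (F.mse_loss(stego, host)                        # imperceptibility
            + F.mse_loss(rec_benign, t)                    # robustness to benign processing
            + F.relu(1.0 - F.mse_loss(rec_forged, t)))     # fragility to malicious forgery
    opt.zero_grad(); loss.backward(); opt.step()

# At test time, a detector compares the revealed image with the known template;
# a large deviation flags the image as maliciously tampered rather than benignly processed.
```

The three loss terms mirror the abstract's goals: the steganographic image stays close to the host, the hidden template survives benign processing, and it is destroyed by forgery, which is the signal the downstream detector exploits.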
Related papers
- Facial Features Matter: a Dynamic Watermark based Proactive Deepfake Detection Approach [11.51480331713537]
This paper proposes a Facial Feature-based Proactive deepfake detection method (FaceProtect).
We introduce a GAN-based One-way Dynamic Watermark Generating Mechanism (GODWGM) that uses 128-dimensional facial feature vectors as inputs.
We also propose a Watermark-based Verification Strategy (WVS) that combines steganography with GODWGM, allowing simultaneous transmission of the benchmark watermark.
arXiv Detail & Related papers (2024-11-22T08:49:08Z) - Natias: Neuron Attribution based Transferable Image Adversarial Steganography [62.906821876314275]
Adversarial steganography has garnered considerable attention due to its ability to effectively deceive deep-learning-based steganalysis.
We propose a novel adversarial steganographic scheme named Natias.
Our proposed method can be seamlessly integrated with existing adversarial steganography frameworks.
arXiv Detail & Related papers (2024-09-08T04:09:51Z) - UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, and thus generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z) - Robust Identity Perceptual Watermark Against Deepfake Face Swapping [8.276177968730549]
Deepfake face swapping has caused critical privacy issues with the rapid development of deep generative models.
We propose the first robust identity perceptual watermarking framework that concurrently performs detection and source tracing against Deepfake face swapping.
arXiv Detail & Related papers (2023-11-02T16:04:32Z) - On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models [0.5827521884806072]
We investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods.
Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images.
Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection.
arXiv Detail & Related papers (2023-07-11T15:57:51Z) - Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework - Integrity Encryptor, aiming to protect portraits in a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z) - Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP)
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z) - Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z) - Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques in real-world scenarios require stronger generalization abilities of face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z) - Detect and Locate: A Face Anti-Manipulation Approach with Semantic and Noise-level Supervision [67.73180660609844]
We propose a conceptually simple but effective method to efficiently detect forged faces in an image.
The proposed scheme relies on a segmentation map that delivers meaningful, high-level semantic clues about the image.
The proposed model achieves state-of-the-art detection accuracy and remarkable localization performance.
arXiv Detail & Related papers (2021-07-13T02:59:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.