FaceGuard: Proactive Deepfake Detection
- URL: http://arxiv.org/abs/2109.05673v1
- Date: Mon, 13 Sep 2021 02:36:25 GMT
- Title: FaceGuard: Proactive Deepfake Detection
- Authors: Yuankun Yang, Chenyue Liang, Hongyu He, Xiaoyu Cao, Neil Zhenqiang Gong
- Abstract summary: We propose FaceGuard, a proactive deepfake-detection framework.
FaceGuard embeds a watermark into a real face image before it is published on social media.
It predicts the face image to be fake if the extracted watermark does not match well with the individual's ground truth one.
- Score: 15.938409771740643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing deepfake-detection methods focus on passive detection, i.e., they
detect fake face images via exploiting the artifacts produced during deepfake
manipulation. A key limitation of passive detection is that it cannot detect
fake faces that are generated by new deepfake generation methods. In this work,
we propose FaceGuard, a proactive deepfake-detection framework. FaceGuard
embeds a watermark into a real face image before it is published on social
media. Given a face image that claims to be an individual (e.g., Nicolas Cage),
FaceGuard extracts a watermark from it and predicts the face image to be fake
if the extracted watermark does not match well with the individual's ground
truth one. A key component of FaceGuard is a new deep-learning-based
watermarking method, which is 1) robust to normal image post-processing such as
JPEG compression, Gaussian blurring, cropping, and resizing, but 2) fragile to
deepfake manipulation. Our evaluation on multiple datasets shows that FaceGuard
can detect deepfakes accurately and outperforms existing methods.
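The decision rule in the abstract can be sketched in a few lines: extract a bit string from the image, compare it against the individual's ground-truth watermark, and flag the image as fake when the bit agreement falls below a threshold. The sketch below is illustrative only; the 32-bit watermark length, the 0.75 threshold, and the helper names are assumptions, not values from the paper.

```python
def bit_accuracy(extracted, ground_truth):
    """Fraction of positions where two watermark bit strings agree."""
    matches = sum(a == b for a, b in zip(extracted, ground_truth))
    return matches / len(ground_truth)

def is_fake(extracted_bits, ground_truth_bits, threshold=0.75):
    """Flag an image as fake when its extracted watermark does not match
    the claimed individual's ground-truth watermark well enough.
    The 0.75 threshold is illustrative, not taken from the paper."""
    return bit_accuracy(extracted_bits, ground_truth_bits) < threshold

# A hypothetical 32-bit ground-truth watermark for one individual.
gt = [1, 0, 1, 1, 0, 0, 1, 0] * 4

# Benign post-processing (JPEG, blurring, resizing) may flip a few bits;
# the robust watermark still matches well.
benign = gt.copy()
benign[0] ^= 1
benign[1] ^= 1
print(is_fake(benign, gt))   # False: 30/32 bits agree

# Deepfake manipulation destroys the fragile watermark; modeled here
# as every bit being flipped.
forged = [b ^ 1 for b in gt]
print(is_fake(forged, gt))   # True: 0/32 bits agree
```

The robust-but-fragile property of the watermarking network is what makes this simple threshold test meaningful: benign edits keep bit accuracy high, while generative manipulation drives it toward chance.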
Related papers
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z) - SepMark: Deep Separable Watermarking for Unified Source Tracing and Deepfake Detection [15.54035395750232]
Malicious Deepfakes have led to a sharp conflict over distinguishing between genuine and forged faces.
We propose SepMark, which provides a unified framework for source tracing and Deepfake detection.
arXiv Detail & Related papers (2023-05-10T17:15:09Z) - Docmarking: Real-Time Screen-Cam Robust Document Image Watermarking [97.77394585669562]
The proposed approach does not try to prevent a leak in the first place, but rather aims to determine the source of the leak.
The method works by overlaying a unique identifying watermark on the screen as a semi-transparent image.
The watermark image is static and stays on the screen at all times, so the watermark is present in every captured photograph of the screen.
arXiv Detail & Related papers (2023-04-25T09:32:11Z) - Mover: Mask and Recovery based Facial Part Consistency Aware Method for Deepfake Video Detection [33.29744034340998]
Mover is a new Deepfake detection model that exploits unspecific facial part inconsistencies.
We propose a novel model with dual networks that utilize the pretrained encoder and masked autoencoder.
Our experiments on standard benchmarks demonstrate that Mover is highly effective.
arXiv Detail & Related papers (2023-03-03T06:57:22Z) - Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
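The certificate claimed above can be read as a simple geometric condition: the watermark is guaranteed to persist only while a suspect model's parameters remain within an l2 ball around the watermarked parameters. A minimal sketch of that check, with hypothetical flattened parameter vectors and a made-up certified radius:

```python
import math

def l2_distance(params_a, params_b):
    """Euclidean distance between two flattened parameter vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(params_a, params_b)))

def inside_certified_ball(watermarked, suspect, radius):
    """True while the suspect model lies within the certified l2 radius,
    i.e. the regime in which the watermark is guaranteed to survive.
    `radius` stands in for whatever bound the certification yields."""
    return l2_distance(watermarked, suspect) <= radius

# Hypothetical 4-parameter models and a made-up certified radius of 0.5.
w = [0.2, -0.1, 0.4, 0.0]
fine_tuned = [0.21, -0.12, 0.39, 0.01]   # small drift: still certified
overwritten = [1.0, 1.0, 1.0, 1.0]       # large change: no guarantee

print(inside_certified_ball(w, fine_tuned, 0.5))   # True
print(inside_certified_ball(w, overwritten, 0.5))  # False
```

Note this only expresses the certificate's precondition; deriving the actual radius requires the randomized-smoothing machinery of the paper.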
arXiv Detail & Related papers (2022-07-16T16:06:59Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction; adversarial examples are then transferred from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and Countering Deepfakes [25.277040616599336]
Deepfakes and manipulated media are becoming a prominent threat due to the recent advances in realistic image and video synthesis techniques.
We introduce a deep learning based semi-fragile watermarking technique that allows media authentication by verifying an invisible secret message embedded in the image pixels.
arXiv Detail & Related papers (2022-04-05T03:29:30Z) - Deepfake Detection for Facial Images with Facemasks [17.238556058316412]
We thoroughly evaluate the performance of state-of-the-art deepfake detection models on deepfakes with facemasks.
We propose two approaches to enhance masked deepfake detection: face-patch and face-crop.
arXiv Detail & Related papers (2022-02-23T09:01:27Z) - Understanding the Security of Deepfake Detection [23.118012417901078]
We study the security of state-of-the-art deepfake detection methods in adversarial settings.
We use two large-scale public deepfakes data sources including FaceForensics++ and Facebook Deepfake Detection Challenge.
Our results uncover multiple security limitations of the deepfake detection methods in adversarial settings.
arXiv Detail & Related papers (2021-07-05T14:18:21Z) - CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes [74.18502861399591]
Malicious applications of deepfakes (i.e., technologies that can generate target faces or face attributes) have posed a huge threat to our society.
We propose a universal adversarial attack method against deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
arXiv Detail & Related papers (2021-05-23T07:28:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.