FaceGuard: Proactive Deepfake Detection
- URL: http://arxiv.org/abs/2109.05673v1
- Date: Mon, 13 Sep 2021 02:36:25 GMT
- Title: FaceGuard: Proactive Deepfake Detection
- Authors: Yuankun Yang, Chenyue Liang, Hongyu He, Xiaoyu Cao, Neil Zhenqiang Gong
- Abstract summary: We propose FaceGuard, a proactive deepfake-detection framework.
FaceGuard embeds a watermark into a real face image before it is published on social media.
It predicts the face image to be fake if the extracted watermark does not match well with the individual's ground truth one.
- Score: 15.938409771740643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing deepfake-detection methods focus on passive detection, i.e., they
detect fake face images via exploiting the artifacts produced during deepfake
manipulation. A key limitation of passive detection is that it cannot detect
fake faces that are generated by new deepfake generation methods. In this work,
we propose FaceGuard, a proactive deepfake-detection framework. FaceGuard
embeds a watermark into a real face image before it is published on social
media. Given a face image that claims to be an individual (e.g., Nicolas Cage),
FaceGuard extracts a watermark from it and predicts the face image to be fake
if the extracted watermark does not match well with the individual's ground
truth one. A key component of FaceGuard is a new deep-learning-based
watermarking method, which is 1) robust to normal image post-processing such as
JPEG compression, Gaussian blurring, cropping, and resizing, but 2) fragile to
deepfake manipulation. Our evaluation on multiple datasets shows that FaceGuard
can detect deepfakes accurately and outperforms existing methods.
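The matching step described in the abstract can be sketched as a bit-accuracy comparison between the extracted watermark and the individual's ground-truth one. This is a minimal illustration, not the paper's implementation: the 256-bit watermark length and the 0.8 acceptance threshold are assumptions made for the example.

```python
import numpy as np

def is_fake(extracted_bits: np.ndarray,
            ground_truth_bits: np.ndarray,
            threshold: float = 0.8) -> bool:
    """Flag an image as fake when the extracted watermark does not
    match the individual's ground-truth watermark well enough.

    The 0.8 bit-accuracy threshold is an illustrative assumption."""
    # Fraction of watermark bits that survived intact.
    bit_accuracy = np.mean(extracted_bits == ground_truth_bits)
    return bool(bit_accuracy < threshold)

# A robust-but-fragile watermark survives benign post-processing
# (few flipped bits) but is destroyed by deepfake manipulation.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=256)
benign = truth.copy()
benign[:10] ^= 1                        # ~4% of bits flipped by JPEG/blur
forged = rng.integers(0, 2, size=256)   # watermark destroyed

print(is_fake(benign, truth))   # False: accepted as real
print(is_fake(forged, truth))   # True: flagged as fake
```

The threshold trades off false positives from heavy-but-benign post-processing against missed detections; the paper's watermarking method is trained so that benign and manipulated images separate cleanly on this axis.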
Related papers
- LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks [7.965986856780787]
This paper introduces a novel training-free landmark perceptual watermark, LampMark for short.
We first analyze the structure-sensitive characteristics of Deepfake manipulations and devise a secure and confidential transformation pipeline.
We present an end-to-end watermarking framework that imperceptibly embeds and extracts watermarks concerning the images to be protected.
arXiv Detail & Related papers (2024-11-26T08:24:56Z)
- Facial Features Matter: a Dynamic Watermark based Proactive Deepfake Detection Approach [11.51480331713537]
This paper proposes a Facial Feature-based Proactive deepfake detection method (FaceProtect).
We introduce a GAN-based One-way Dynamic Watermark Generating Mechanism (GODWGM) that uses 128-dimensional facial feature vectors as inputs.
We also propose a Watermark-based Verification Strategy (WVS) that combines steganography with GODWGM, allowing simultaneous transmission of the benchmark watermark.
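The one-way property of GODWGM can be illustrated with a minimal sketch. The paper uses a GAN-based mechanism; the hash-based mapping and the quantization step below are substitutions made purely to show the idea that the 128-dimensional facial features cannot be recovered from the watermark.

```python
import hashlib
import numpy as np

def one_way_watermark(features: np.ndarray, bits: int = 128) -> np.ndarray:
    """Map a 128-dimensional facial feature vector to a watermark
    bitstring from which the features cannot be recovered.

    A cryptographic hash stands in for the paper's GAN-based
    generator; the 2-decimal quantization is an assumption."""
    # Quantize so that near-identical feature vectors map to the
    # same watermark despite tiny numerical differences.
    quantized = np.round(features, decimals=2).tobytes()
    digest = hashlib.sha256(quantized).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:bits]

feats = np.zeros(128)
wm = one_way_watermark(feats)
print(wm.shape)  # (128,)
```

Because the mapping is one-way, publishing the watermark reveals nothing about the underlying facial features, which is the property the verification strategy relies on.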
arXiv Detail & Related papers (2024-11-22T08:49:08Z)
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research, we propose geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos with multiple faces that are simultaneously present in a video.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Docmarking: Real-Time Screen-Cam Robust Document Image Watermarking [97.77394585669562]
The proposed approach does not try to prevent a leak in the first place but rather aims to determine the source of the leak.
The method works by applying a unique identifying watermark to the screen as a semi-transparent image.
The watermark image is static and stays on the screen at all times, so the watermark is present in every captured photograph of the screen.
arXiv Detail & Related papers (2023-04-25T09:32:11Z) - Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
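The certified guarantee above can be phrased as a simple parameter-space check. Flattening all model parameters into one vector and the threshold value used below are illustrative assumptions for the sketch, not details from the paper.

```python
import numpy as np

def watermark_certified(original_params: np.ndarray,
                        modified_params: np.ndarray,
                        l2_threshold: float) -> bool:
    """The certificate guarantees the watermark survives any
    modification whose parameter-space l2 norm stays below the
    threshold; larger changes void the guarantee."""
    # l2 distance between the two parameter vectors.
    delta = float(np.linalg.norm(modified_params - original_params))
    return delta <= l2_threshold

params = np.ones(1000)
finetuned = params + 0.001  # light fine-tuning: l2 norm ~ 0.032
print(watermark_certified(params, finetuned, l2_threshold=1.0))  # True
```

The practical appeal of such a certificate is that it holds for any attack within the l2 ball, rather than only for the removal attacks tried empirically.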
arXiv Detail & Related papers (2022-07-16T16:06:59Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and
Countering Deepfakes [25.277040616599336]
Deepfakes and manipulated media are becoming a prominent threat due to the recent advances in realistic image and video synthesis techniques.
We introduce a deep learning based semi-fragile watermarking technique that allows media authentication by verifying an invisible secret message embedded in the image pixels.
arXiv Detail & Related papers (2022-04-05T03:29:30Z) - Deepfake Detection for Facial Images with Facemasks [17.238556058316412]
We thoroughly evaluate the performance of state-of-the-art deepfake detection models on deepfakes with facemasks.
We propose two approaches to enhance masked deepfake detection: face-patch and face-crop.
arXiv Detail & Related papers (2022-02-23T09:01:27Z) - Understanding the Security of Deepfake Detection [23.118012417901078]
We study the security of state-of-the-art deepfake detection methods in adversarial settings.
We use two large-scale public deepfake datasets, FaceForensics++ and the Facebook Deepfake Detection Challenge.
Our results uncover multiple security limitations of the deepfake detection methods in adversarial settings.
arXiv Detail & Related papers (2021-07-05T14:18:21Z) - CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for
Combating Deepfakes [74.18502861399591]
The malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) poses a huge threat to our society.
We propose a universal adversarial attack method on deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
arXiv Detail & Related papers (2021-05-23T07:28:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.