Landmark Breaker: Obstructing DeepFake By Disturbing Landmark Extraction
- URL: http://arxiv.org/abs/2102.00798v1
- Date: Mon, 1 Feb 2021 12:27:08 GMT
- Title: Landmark Breaker: Obstructing DeepFake By Disturbing Landmark Extraction
- Authors: Pu Sun, Yuezun Li, Honggang Qi and Siwei Lyu
- Abstract summary: We describe Landmark Breaker, the first dedicated method to disrupt facial landmark extraction.
Our motivation is that disrupting facial landmark extraction can affect the alignment of the input face and thereby degrade DeepFake quality.
Compared to detection methods, which only work after DeepFake generation, Landmark Breaker goes one step further to prevent DeepFake generation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent development of Deep Neural Networks (DNNs) has significantly
increased the realism of AI-synthesized faces, with the most notable examples
being DeepFakes. DeepFake technology can synthesize the face of a target
subject from the face of another subject while retaining the same facial
attributes. With the rapid growth of social media portals (Facebook, Instagram,
etc.), these realistic fake faces spread quickly through the Internet, causing
a broad negative impact on society. In this paper, we describe Landmark
Breaker, the first dedicated method to disrupt facial landmark extraction, and
apply it to obstructing the generation of DeepFake videos. Our motivation is
that disrupting facial landmark extraction can affect the alignment of the
input face and thereby degrade DeepFake quality. Our method is achieved using
adversarial perturbations. Compared to detection methods, which only work
after DeepFake generation, Landmark Breaker goes one step further to prevent
DeepFake generation. The experiments are conducted on three state-of-the-art
facial landmark extractors using the recent Celeb-DF dataset.
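The adversarial-perturbation idea in the abstract can be illustrated with a minimal FGSM-style sketch. This is not the paper's actual algorithm: the linear `predict_landmark` model, its weights, and the `eps` budget below are hypothetical stand-ins for a real landmark extractor, used only to show how a signed-gradient step bounded in L-infinity norm shifts a predicted landmark coordinate.

```python
# Toy sketch: perturb an "image" (a flat list of pixels) so that a
# hypothetical linear landmark predictor's output is displaced.
# For a linear model, d(landmark)/d(pixel_i) = weights[i], so the
# FGSM step adds eps * sign(weights[i]) to each pixel.

def predict_landmark(image, weights):
    """Toy landmark coordinate: a weighted sum of pixel values."""
    return sum(w * p for w, p in zip(weights, image))

def fgsm_perturb(image, weights, eps):
    """One signed-gradient step, bounded by eps per pixel."""
    sign = lambda x: (x > 0) - (x < 0)
    return [p + eps * sign(w) for p, w in zip(image, weights)]

image = [0.2, 0.5, 0.8, 0.1]          # hypothetical pixel values
weights = [0.3, -0.7, 0.4, 0.1]       # hypothetical model gradient
eps = 0.03                            # perturbation budget

clean = predict_landmark(image, weights)
adv = fgsm_perturb(image, weights, eps)
shifted = predict_landmark(adv, weights)

# For a linear model the displacement is exactly eps * sum(|w|),
# while no pixel changes by more than eps (imperceptibility budget).
displacement = shifted - clean
```

In a real attack such as the one the abstract describes, the gradient would come from backpropagating a landmark-displacement loss through a DNN landmark extractor, typically with an iterative (PGD-style) variant of this single step.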
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion [94.46904504076124]
Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content.
Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations.
We introduce DiffusionFake, a novel framework that reverses the generative process of face forgeries to enhance the generalization of detection models.
arXiv Detail & Related papers (2024-10-06T06:22:43Z) - Active Fake: DeepFake Camouflage [11.976015496109525]
Face-Swap DeepFake fabricates behaviors by swapping original faces with synthesized ones.
Existing forensic methods, primarily based on Deep Neural Networks (DNNs), effectively expose these manipulations and have become important authenticity indicators.
We introduce a new framework for creating DeepFake camouflage that generates blending inconsistencies while ensuring imperceptibility, effectiveness, and transferability.
arXiv Detail & Related papers (2024-09-05T02:46:36Z) - FakeTracer: Catching Face-swap DeepFakes via Implanting Traces in Training [36.158715089667034]
Face-swap DeepFake is an emerging AI-based face forgery technique.
Due to the high privacy of faces, the misuse of this technique can raise severe social concerns.
We describe a new proactive defense method called FakeTracer to expose face-swap DeepFakes via implanting traces in training.
arXiv Detail & Related papers (2023-07-27T02:36:13Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction, and transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - DeepFakes: Detecting Forged and Synthetic Media Content Using Machine
Learning [18.623444153774948]
The study presents challenges, research trends, and directions related to DeepFake creation and detection techniques.
The study reviews notable research in the DeepFake domain to facilitate the development of more robust approaches that can deal with more advanced DeepFakes in the future.
arXiv Detail & Related papers (2021-09-07T05:19:36Z) - End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z) - CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for
Combating Deepfakes [74.18502861399591]
Malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) has posed a huge threat to our society.
We propose a universal adversarial attack method on deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
arXiv Detail & Related papers (2021-05-23T07:28:36Z) - Countering Malicious DeepFakes: Survey, Battleground, and Horizon [17.153920019319603]
The creation and manipulation of facial appearance via deep generative approaches, known as DeepFake, have achieved significant progress.
The harmful side of this new technique has spurred another popular line of study, i.e., DeepFake detection, which aims to distinguish fake faces from real ones.
With the rapid development of DeepFake-related studies in the community, the two sides (i.e., DeepFake generation and detection) have formed a battleground relationship.
arXiv Detail & Related papers (2021-02-27T13:48:54Z)