FakeTracer: Catching Face-swap DeepFakes via Implanting Traces in Training
- URL: http://arxiv.org/abs/2307.14593v2
- Date: Sun, 21 Apr 2024 09:02:36 GMT
- Title: FakeTracer: Catching Face-swap DeepFakes via Implanting Traces in Training
- Authors: Pu Sun, Honggang Qi, Yuezun Li, Siwei Lyu
- Abstract summary: Face-swap DeepFake is an emerging AI-based face forgery technique.
Because faces carry highly private information, misuse of this technique can raise severe social concerns.
We describe a new proactive defense method called FakeTracer to expose face-swap DeepFakes via implanting traces in training.
- Score: 36.158715089667034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face-swap DeepFake is an emerging AI-based face forgery technique that can replace the original face in a video with a generated face of the target identity while retaining consistent facial attributes such as expression and orientation. Because faces carry highly private information, misuse of this technique can raise severe social concerns, and defending against DeepFakes has recently drawn tremendous attention. In this paper, we describe a new proactive defense method called FakeTracer that exposes face-swap DeepFakes via traces implanted during training. Compared to general face-synthesis DeepFakes, the face-swap DeepFake is more complex: it involves an identity change, is subject to an encoding-decoding process, and is trained without supervision, all of which increase the difficulty of implanting traces into the training phase. To defend effectively against face-swap DeepFakes, we design two types of traces, a sustainable trace (STrace) and an erasable trace (ETrace), to be added to training faces. During training, these manipulated faces affect the learning of the face-swap DeepFake model, causing it to generate faces that contain only the sustainable trace. By checking for these two traces, our method can effectively identify and expose DeepFakes. Extensive experiments corroborate the efficacy of our method in defending against face-swap DeepFakes.
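As a rough illustration of the idea (not the paper's actual implementation), the sketch below implants a hypothetical low-frequency "sustainable" trace and a high-frequency "erasable" trace into training faces, then flags a suspect image by checking whether the sustainable trace survives. The trace designs, amplitudes, and the correlation detector are all illustrative assumptions.

```python
import numpy as np

def make_strace(shape, amplitude=8.0, period=8, seed=0):
    """Hypothetical sustainable trace (STrace): a low-amplitude periodic
    pattern that an encoding-decoding pipeline would tend to reproduce."""
    rng = np.random.default_rng(seed)
    h, w = shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    phase = rng.uniform(0, 2 * np.pi)
    pattern = np.sin(2 * np.pi * (xx + yy) / period + phase)
    return amplitude * pattern[..., None]  # broadcast over color channels

def make_etrace(shape, amplitude=8.0, seed=1):
    """Hypothetical erasable trace (ETrace): i.i.d. high-frequency noise
    that the encoding-decoding process would tend to remove."""
    rng = np.random.default_rng(seed)
    return amplitude * rng.choice([-1.0, 1.0], size=shape)

def implant_traces(face, strace, etrace):
    """Add both traces to a training face (uint8 HxWx3 image)."""
    traced = face.astype(np.float32) + strace + etrace
    return np.clip(traced, 0, 255).astype(np.uint8)

def detect(image, strace, threshold=0.2):
    """Toy detector: a generated face should correlate with the
    sustainable trace; a clean face should not."""
    residual = image.astype(np.float32) - float(image.mean())
    t = np.broadcast_to(strace, residual.shape) - float(strace.mean())
    denom = np.linalg.norm(residual) * np.linalg.norm(t) + 1e-8
    return float(np.abs((residual * t).sum()) / denom) > threshold
```

In this toy version, a DeepFake model trained on traced faces is simulated by an image that retains the sustainable trace while the erasable noise has been washed out by encoding-decoding.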
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research, we propose geometric-fakeness features (GFF), which characterize the dynamic degree of face presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- ReliableSwap: Boosting General Face Swapping Via Reliable Supervision [9.725105108879717]
This paper proposes constructing reliable supervision, dubbed cycle triplets, which serve as image-level guidance when the source identity differs from the target identity during training.
Specifically, we use face reenactment and blending techniques to synthesize the swapped face from real images in advance.
Our face swapping framework, named ReliableSwap, can boost the performance of any existing face swapping network with negligible overhead.
arXiv Detail & Related papers (2023-06-08T17:01:14Z)
- Deepfake Face Traceability with Disentangling Reversing Network [40.579533545888516]
Deepfake faces not only violate the privacy of personal identity but also confuse the public and cause huge social harm.
Current deepfake detection only distinguishes real from fake and cannot trace the original genuine face corresponding to a fake face.
This paper pioneers an interesting question in face deepfake forensics: active forensics that can "know it and how it happened."
arXiv Detail & Related papers (2022-07-08T03:05:28Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Generating Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity-authentication for a large portion of the population.
We optimize these faces using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator.
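The search described above can be sketched as a simple evolutionary loop. In the sketch below, `face_embed`, the latent dimension, and the similarity threshold are hypothetical stand-ins for the StyleGAN generator followed by a face-recognition encoder; this is not the paper's actual setup.

```python
import numpy as np

def coverage(latent, population_embeds, face_embed, tau=0.6):
    """Fraction of the enrolled population whose embedding matches the
    face generated from `latent` (cosine similarity above `tau`).
    `face_embed` stands in for generator + recognition encoder."""
    e = face_embed(latent)
    sims = population_embeds @ e / (
        np.linalg.norm(population_embeds, axis=1) * np.linalg.norm(e) + 1e-8)
    return float((sims > tau).mean())

def evolve_master_face(face_embed, population_embeds, dim=64,
                       pop=32, gens=50, sigma=0.3, seed=0):
    """Hill-climbing evolutionary search for a 'master' latent vector
    that maximizes population coverage."""
    rng = np.random.default_rng(seed)
    best = rng.standard_normal(dim)
    best_fit = coverage(best, population_embeds, face_embed)
    for _ in range(gens):
        # mutate the incumbent and keep the best candidate if it is no worse
        cands = best + sigma * rng.standard_normal((pop, dim))
        fits = [coverage(c, population_embeds, face_embed) for c in cands]
        i = int(np.argmax(fits))
        if fits[i] >= best_fit:
            best, best_fit = cands[i], fits[i]
    return best, best_fit
```

By construction the loop is monotone: the returned coverage is never below that of the initial random latent.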
arXiv Detail & Related papers (2021-08-01T12:55:23Z)
- One Shot Face Swapping on Megapixels [65.47443090320955]
This paper proposes the first megapixel-level method for one-shot Face Swapping (or MegaFS for short).
Complete face representation, stable training, and limited memory usage are the three novel contributions to the success of our method.
arXiv Detail & Related papers (2021-05-11T10:41:47Z)
- Landmark Breaker: Obstructing DeepFake By Disturbing Landmark Extraction [40.71503677067645]
We describe Landmark Breaker, the first dedicated method to disrupt facial landmark extraction.
Our motivation is that disrupting facial landmark extraction affects the alignment of the input face and thereby degrades DeepFake quality.
Compared to detection methods, which only work after DeepFake generation, Landmark Breaker goes one step further by preventing DeepFake generation.
arXiv Detail & Related papers (2021-02-01T12:27:08Z)
- Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces [36.87244915810356]
Deepfake represents a category of face-swapping attacks that leverage machine learning models.
We propose to use novel transformation-aware adversarially perturbed faces as a defense against Deepfake attacks.
We also propose to use an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants.
arXiv Detail & Related papers (2020-06-12T18:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.