Contrastive Pseudo Learning for Open-World DeepFake Attribution
- URL: http://arxiv.org/abs/2309.11132v1
- Date: Wed, 20 Sep 2023 08:29:22 GMT
- Title: Contrastive Pseudo Learning for Open-World DeepFake Attribution
- Authors: Zhimin Sun, Shen Chen, Taiping Yao, Bangjie Yin, Ran Yi, Shouhong
Ding, Lizhuang Ma
- Abstract summary: We introduce a new benchmark called Open-World DeepFake Attribution (OW-DFA), which aims to evaluate attribution performance against various types of fake faces under open-world scenarios.
We propose a novel framework named Contrastive Pseudo Learning (CPL) for the OW-DFA task by 1) introducing a Global-Local Voting module to guide the feature alignment of forged faces with different manipulated regions, and 2) designing a Confidence-based Soft Pseudo-label strategy to mitigate the pseudo-noise caused by similar methods in the unlabeled set.
- Score: 67.58954345538547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The challenge of source attribution for forged faces has gained widespread attention due to the rapid development of generative techniques. While many recent works have taken essential steps on GAN-generated faces, more threatening attacks related to identity swapping or expression transfer are still overlooked, and the forgery traces hidden in unknown attacks from open-world unlabeled faces remain under-explored. To push this frontier, we introduce a new benchmark called Open-World DeepFake Attribution (OW-DFA), which aims to evaluate attribution performance against various types of fake faces under open-world scenarios. Meanwhile, we propose a novel framework named Contrastive Pseudo Learning (CPL) for the OW-DFA task by 1) introducing a Global-Local Voting module to guide the feature alignment of forged faces with different manipulated regions, and 2) designing a Confidence-based Soft Pseudo-label strategy to mitigate the pseudo-noise caused by similar methods in the unlabeled set. In addition, we extend the CPL framework with a multi-stage paradigm that leverages pre-training and iterative learning to further enhance traceability performance. Extensive experiments verify the superiority of our proposed method on the OW-DFA benchmark, and also demonstrate the interpretability of the deepfake attribution task and its impact on improving the security of the deepfake detection area.
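A minimal sketch may help make the two CPL components concrete. The PyTorch-style code below illustrates a confidence-based soft pseudo-label assignment and a pseudo-label-weighted contrastive loss; the tensor names (feats, prototypes), the temperature, and the confidence threshold are illustrative assumptions and do not reflect the authors' released implementation.

import torch
import torch.nn.functional as F

def soft_pseudo_labels(feats, prototypes, temperature=0.1, conf_threshold=0.8):
    """Assign soft pseudo-labels to unlabeled forged faces.

    feats:      (N, D) L2-normalized features of unlabeled samples (assumed).
    prototypes: (K, D) L2-normalized class prototypes from the labeled set (assumed).
    Returns soft labels (N, K) and a boolean confidence mask (N,). Low-confidence
    samples keep their full soft distribution, which is one way to soften the
    pseudo-noise caused by visually similar forgery methods.
    """
    sims = feats @ prototypes.t() / temperature          # cosine-similarity logits
    soft = F.softmax(sims, dim=1)                        # soft pseudo-label distribution
    conf, hard = soft.max(dim=1)                         # confidence and arg-max class
    confident = conf >= conf_threshold                   # trust only high-confidence samples
    labels = soft.clone()
    labels[confident] = F.one_hot(hard[confident], soft.size(1)).float()  # sharpen confident ones
    return labels, confident

def pseudo_contrastive_loss(feats, labels, temperature=0.1):
    """Supervised-contrastive-style loss in which two samples count as a positive
    pair in proportion to the overlap of their (soft) pseudo-labels."""
    sims = feats @ feats.t() / temperature
    eye = torch.eye(feats.size(0), device=feats.device, dtype=torch.bool)
    logits = sims.masked_fill(eye, -1e9)                 # exclude self-pairs
    pos_weight = (labels @ labels.t()).masked_fill(eye, 0.0)
    log_prob = F.log_softmax(logits, dim=1)
    loss = -(pos_weight * log_prob).sum(dim=1) / pos_weight.sum(dim=1).clamp(min=1e-6)
    return loss.mean()

In this reading, confident unlabeled samples are pulled towards the labeled prototypes they resemble, while uncertain ones contribute only softly, which is the intuition behind mitigating pseudo-noise from similar manipulation methods.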
Related papers
- Fake It till You Make It: Curricular Dynamic Forgery Augmentations towards General Deepfake Detection [15.857961926916465]
We present a novel general deepfake detection method called Curricular Dynamic Forgery Augmentation (CDFA).
CDFA jointly trains a deepfake detector with a forgery augmentation policy network.
We show that CDFA can significantly improve both the cross-dataset and cross-manipulation performance of various naive deepfake detectors.
arXiv Detail & Related papers (2024-09-22T13:51:22Z)
- Semantics-Oriented Multitask Learning for DeepFake Detection: A Joint Embedding Approach [77.65459419417533]
We propose an automatic dataset expansion technique to support semantics-oriented DeepFake detection tasks.
We also resort to the joint embedding of face images and their corresponding labels for prediction.
Our method improves the generalizability of DeepFake detection and offers a degree of model interpretability by providing human-understandable explanations.
arXiv Detail & Related papers (2024-08-29T07:11:50Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Towards More General Video-based Deepfake Detection through Facial Feature Guided Adaptation for Foundation Model [15.61920157541529]
We propose a novel Deepfake detection approach that adapts foundation models, leveraging the rich information encoded inside them.
Inspired by recent advances in parameter-efficient fine-tuning, we propose a novel side-network-based decoder.
Our approach exhibits superior effectiveness in identifying unseen Deepfake samples, achieving notable performance improvements.
arXiv Detail & Related papers (2024-04-08T14:58:52Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, using digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Dual Contrastive Learning for General Face Forgery Detection [64.41970626226221]
We propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which constructs positive and negative paired data.
To explore the essential discrepancies, Intra-Instance Contrastive Learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in the forged faces.
arXiv Detail & Related papers (2021-12-27T05:44:40Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
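The dropout-based surrogate diversification described for DFANet above can be sketched roughly as follows. This is a hypothetical illustration assuming a face-embedding surrogate model and an iterative impersonation-style attack; the helper names, dropout rate, and step sizes are assumptions, not the paper's actual method.

import torch
import torch.nn as nn

def add_conv_dropout(model, p=0.1):
    """Insert Dropout2d after every Conv2d so that each forward pass behaves like
    a slightly different surrogate network, diversifying attack gradients."""
    for name, module in model.named_children():
        if isinstance(module, nn.Conv2d):
            setattr(model, name, nn.Sequential(module, nn.Dropout2d(p)))
        else:
            add_conv_dropout(module, p)
    return model

def iterative_impersonation_attack(model, faces, target_feats, steps=10, eps=8/255, alpha=2/255):
    """Iteratively perturb input faces to increase cosine similarity with a target
    embedding; model.train() keeps dropout active so every step samples a new
    randomly thinned surrogate."""
    model.train()
    adv = faces.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.functional.cosine_similarity(model(adv), target_feats).mean()
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv + alpha * grad.sign()).detach()       # gradient ascent towards the target identity
        adv = faces + (adv - faces).clamp(-eps, eps)     # project back into the L_inf budget
        adv = adv.clamp(0, 1)                            # keep pixels in valid range
    return adv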