DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant
Forgery Clues
- URL: http://arxiv.org/abs/2309.09526v1
- Date: Mon, 18 Sep 2023 07:02:26 GMT
- Title: DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant
Forgery Clues
- Authors: Kun Pan, Yin Yifang, Yao Wei, Feng Lin, Zhongjie Ba, Zhenguang Liu,
ZhiBo Wang, Lorenzo Cavallaro and Kui Ren
- Abstract summary: Current deepfake detection models can generally recognize forgery images by training on a large dataset.
The accuracy of detection models degrades significantly on images generated by new deepfake methods due to the difference in data distribution.
We present a novel incremental learning framework that improves the generalization of deepfake detection models.
- Score: 32.045504965382015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The malicious use and widespread dissemination of deepfakes pose a significant
crisis of trust. Current deepfake detection models can generally recognize
forgery images by training on a large dataset. However, the accuracy of
detection models degrades significantly on images generated by new deepfake
methods due to the difference in data distribution. To tackle this issue, we
present a novel incremental learning framework that improves the generalization
of deepfake detection models by continual learning from a small number of new
samples. To cope with different data distributions, we propose to learn a
domain-invariant representation based on supervised contrastive learning, which
prevents overfitting to the insufficient new data. To mitigate catastrophic
forgetting, we regularize our model at both the feature level and the label level
based on a multi-perspective knowledge distillation approach. Finally, we propose
to select both central and hard representative samples to update the replay set,
which is beneficial for both domain-invariant representation learning and
rehearsal-based knowledge preservation. We conduct extensive experiments on four
benchmark datasets, obtaining a new state-of-the-art average forgetting rate
of 7.01 and an average accuracy of 85.49 on FF++, DFDC-P, DFD, and CDF2. Our code
is released at https://github.com/DeepFakeIL/DFIL.
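The abstract outlines three concrete components: a supervised contrastive loss that encourages a domain-invariant representation, feature-level and label-level knowledge distillation from the previous-stage model, and a replay set refreshed with both central and hard samples. The PyTorch-style sketch below illustrates how these pieces could fit together; it is not the authors' released implementation (see the repository linked above), and the function names, loss weights, and selection sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, tau=0.07):
    # Supervised contrastive loss (single view per sample): samples sharing a label
    # (real/fake) are pulled together regardless of which forgery method produced them.
    feats = F.normalize(features, dim=1)                       # (N, D) embeddings
    sim = feats @ feats.t() / tau                              # pairwise similarities
    self_mask = torch.eye(feats.size(0), dtype=torch.bool, device=feats.device)
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)       # drop self-pairs from denominator
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    return -((log_prob * pos_mask.float()).sum(dim=1) / pos_count).mean()

def distillation_losses(student_feat, student_logits, teacher_feat, teacher_logits, T=2.0):
    # Multi-perspective distillation: keep the new model close to the old one at the
    # feature level (MSE on embeddings) and at the label level (KL on softened logits).
    feat_kd = F.mse_loss(student_feat, teacher_feat.detach())
    label_kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction='batchmean',
    ) * (T * T)
    return feat_kd, label_kd

def incremental_loss(student_feat, student_logits, teacher_feat, teacher_logits, labels,
                     w_scl=1.0, w_feat=1.0, w_label=1.0):
    # Overall objective: real/fake classification + contrastive term + distillation.
    # The weights w_* are placeholders, not values reported in the paper.
    ce = F.cross_entropy(student_logits, labels)
    scl = supcon_loss(student_feat, labels)
    feat_kd, label_kd = distillation_losses(student_feat, student_logits,
                                            teacher_feat, teacher_logits)
    return ce + w_scl * scl + w_feat * feat_kd + w_label * label_kd

def select_replay(features, labels, per_sample_loss, k_central=8, k_hard=8):
    # Refresh the replay set per class with "central" samples (closest to the class
    # centroid) and "hard" samples (highest loss), as described in the abstract.
    keep, feats = [], F.normalize(features, dim=1)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        centroid = feats[idx].mean(dim=0, keepdim=True)
        dist = (feats[idx] - centroid).norm(dim=1)
        keep.append(idx[dist.topk(min(k_central, idx.numel()), largest=False).indices])
        keep.append(idx[per_sample_loss[idx].topk(min(k_hard, idx.numel())).indices])
    return torch.cat(keep).unique()
```

In an incremental step, new-method samples plus the current replay set would be passed through both the frozen previous-stage model (teacher) and the model being updated (student), the combined loss above would be minimized, and select_replay would then pick the samples kept for the next stage.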
Related papers
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images while raising concerns about misinformation and copyright infringement.
Deepfake detection techniques have been developed to distinguish between real and fake images.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection [41.436817746749384]
Diffusion Model is a scalable data engine for object detection.
DiffusionEngine (DE) provides high-quality detection-oriented training pairs in a single stage.
arXiv Detail & Related papers (2023-09-07T17:55:01Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranks 1st.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas for leveraging weakly supervised datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim to improve data efficiency for both classification and regression setups in deep learning.
To combine the best of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- FReTAL: Generalizing Deepfake Detection using Knowledge Distillation and Representation Learning [17.97648576135166]
We introduce a transfer learning-based Feature Representation Transfer Adaptation Learning (FReTAL) method.
Our student model can quickly adapt to new types of deepfakes by distilling knowledge from a pre-trained teacher model.
FReTAL outperforms all baselines on the domain adaptation task with up to 86.97% accuracy on low-quality deepfakes.
arXiv Detail & Related papers (2021-05-28T06:54:10Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
- Learning to Recognize Patch-Wise Consistency for Deepfake Detection [39.186451993950044]
We propose a representation learning approach for this task, called patch-wise consistency learning (PCL).
PCL learns by measuring the consistency of image source features, resulting in representations with good interpretability and robustness to multiple forgery methods.
We evaluate our approach on seven popular Deepfake detection datasets.
arXiv Detail & Related papers (2020-12-16T23:06:56Z)