Detecting Deepfakes with Self-Blended Images
- URL: http://arxiv.org/abs/2204.08376v1
- Date: Mon, 18 Apr 2022 15:44:35 GMT
- Title: Detecting Deepfakes with Self-Blended Images
- Authors: Kaede Shiohara and Toshihiko Yamasaki
- Abstract summary: We present novel synthetic training data called self-blended images (SBIs) to detect deepfakes.
SBIs are generated by blending pseudo source and target images from single pristine images.
We compare our approach with state-of-the-art methods on FF++, CDF, DFD, DFDC, DFDCP, and FFIW datasets.
- Score: 37.374772758057844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present novel synthetic training data called self-blended
images (SBIs) to detect deepfakes. SBIs are generated by blending pseudo source
and target images from single pristine images, reproducing common forgery
artifacts (e.g., blending boundaries and statistical inconsistencies between
source and target images). The key idea behind SBIs is that more general and
hardly recognizable fake samples encourage classifiers to learn generic and
robust representations without overfitting to manipulation-specific artifacts.
We compare our approach with state-of-the-art methods on FF++, CDF, DFD, DFDC,
DFDCP, and FFIW datasets by following the standard cross-dataset and
cross-manipulation protocols. Extensive experiments show that our method
improves the model generalization to unknown manipulations and scenes. In
particular, on DFDC and DFDCP where existing methods suffer from the domain gap
between the training and test sets, our approach outperforms the baseline by
4.90 and 11.78 percentage points in the cross-dataset evaluation, respectively.
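The SBI generation described in the abstract can be illustrated in a few lines: a pseudo source is made by lightly perturbing the pristine image, and the two copies are combined through a soft mask so that a blending boundary and mild statistical inconsistencies appear. The sketch below is a minimal illustration under assumed simplifications (a brightness jitter as the source transform, a box-blurred rectangular mask); the paper's actual transform set and mask generation differ.

```python
import numpy as np

def self_blend(img, rng=None):
    """Minimal self-blended image (SBI) sketch: blend a lightly perturbed
    pseudo source into the pristine target through a soft mask.
    Illustrative only; the paper's transforms and masks are richer."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = img.shape
    target = img.astype(np.float32)
    # Pseudo source: the same image with a slight statistical shift
    # (brightness jitter), so source and target differ only subtly.
    source = np.clip(target * rng.uniform(0.95, 1.05)
                     + rng.uniform(-5.0, 5.0), 0, 255)
    # Hard rectangular mask over a central region, then box-blurred a few
    # times: the softened edge reproduces the blending-boundary artifact.
    mask = np.zeros((h, w, 1), np.float32)
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0
    k = 7
    for _ in range(3):
        pad = np.pad(mask, ((k // 2, k // 2), (k // 2, k // 2), (0, 0)),
                     mode='edge')
        mask = np.stack([pad[i:i + h, j:j + w, 0]
                         for i in range(k) for j in range(k)], 0).mean(0)[..., None]
    # Blend: target outside the mask, pseudo source inside.
    blended = target * (1 - mask) + source * mask
    return blended.astype(np.uint8), mask
```

Because source and target come from the same pristine image, the fake sample is hard to recognize, which is exactly what pushes the classifier toward generic boundary/statistics cues rather than manipulation-specific artifacts.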
Related papers
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
- FSBI: Deepfakes Detection with Frequency Enhanced Self-Blended Images [17.707379977847026]
This paper introduces a Frequency Enhanced Self-Blended Images approach for deepfakes detection.
The proposed approach has been evaluated on FF++ and Celeb-DF datasets.
arXiv Detail & Related papers (2024-06-12T20:15:00Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Identifying Invariant Texture Violation for Robust Deepfake Detection [17.306386179823576]
We propose the Invariant Texture Learning framework, which only accesses the published dataset with low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
arXiv Detail & Related papers (2020-12-19T03:02:15Z)
- Learning to Recognize Patch-Wise Consistency for Deepfake Detection [39.186451993950044]
We propose a representation learning approach for this task, called patch-wise consistency learning (PCL).
PCL learns by measuring the consistency of image source features, resulting in representations with good interpretability and robustness to multiple forgery methods.
We evaluate our approach on seven popular Deepfake detection datasets.
arXiv Detail & Related papers (2020-12-16T23:06:56Z)
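The patch-wise consistency idea in the last entry can be sketched concretely: split a feature map into patches, describe each patch by a vector, and compare all pairs; on a pristine face the similarity matrix is near-uniform, while a blended region produces a block of dissimilar patches. The mean-pooled descriptor below is a hypothetical stand-in for features that PCL actually learns with a trained backbone.

```python
import numpy as np

def patch_consistency(feat_map, p=4):
    """Sketch of patch-wise consistency: cosine similarity between every
    pair of non-overlapping p-by-p patch descriptors of a feature map
    (H, W, C). Mean pooling is a placeholder for learned features."""
    h, w, c = feat_map.shape
    # Carve the map into (h//p) x (w//p) patches and pool each to a vector.
    patches = feat_map.reshape(h // p, p, w // p, p, c)
    vecs = patches.mean(axis=(1, 3)).reshape(-1, c)
    # L2-normalize so the dot product is a cosine similarity.
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-8
    return vecs @ vecs.T  # (N, N) patch-consistency matrix
```

A detector built on this signal flags an image when some rows of the matrix are inconsistent with the rest, which generalizes across forgery methods because it never inspects manipulation-specific artifacts.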
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.