DETER: Detecting Edited Regions for Deterring Generative Manipulations
- URL: http://arxiv.org/abs/2312.10539v1
- Date: Sat, 16 Dec 2023 20:38:02 GMT
- Title: DETER: Detecting Edited Regions for Deterring Generative Manipulations
- Authors: Sai Wang, Ye Zhu, Ruoyu Wang, Amaya Dharmasiri, Olga Russakovsky, Yu Wu
- Abstract summary: We introduce DETER, a large-scale dataset for DETEcting edited image Regions.
DETER includes 300,000 images manipulated by four state-of-the-art generators with three editing operations.
Human studies confirm that the human deep fake detection rate on DETER is 20.4% lower than on other fake datasets.
- Score: 31.85788472041527
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative AI capabilities have grown substantially in recent years, raising
renewed concerns about potential malicious use of generated data, or "deep
fakes". However, deep fake datasets have not kept up with generative AI
advancements sufficiently to enable the development of deep fake detection
technology which can meaningfully alert human users in real-world settings.
Existing datasets typically use GAN-based models and introduce spurious
correlations by always editing similar face regions. To counteract these
shortcomings, we introduce DETER, a large-scale dataset for DETEcting edited
image Regions and deterring modern advanced generative manipulations. DETER
includes 300,000 images manipulated by four state-of-the-art generators with
three editing operations: face swapping (a standard coarse image manipulation),
inpainting (a novel manipulation for deep fake datasets), and attribute editing
(a subtle fine-grained manipulation). While face swapping and attribute editing
are performed on similar face regions such as eyes and nose, the inpainting
operation can be performed on random image regions, removing the spurious
correlations of previous datasets. Careful image post-processing is performed
to ensure deep fakes in DETER look realistic, and human studies confirm that
human deep fake detection rate on DETER is 20.4% lower than on other fake
datasets. Equipped with the dataset, we conduct extensive experiments and
break-down analysis using our rich annotations and improved benchmark
protocols, revealing future directions and the next set of challenges in
developing reliable regional fake detection models.
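The abstract's "rich annotations" and "regional fake detection" suggest mask-level evaluation: a detector must localize the edited region, not just classify the whole image. As a rough illustration only (not the paper's actual benchmark protocol; the helper names and the 0.5 IoU threshold are hypothetical), such a localization metric might be sketched as:

```python
import numpy as np

def region_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-union between a predicted edited-region mask
    and a ground-truth region annotation (both boolean H x W arrays)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Neither mask marks any pixel as edited: perfect agreement.
        return 1.0
    inter = np.logical_and(pred, gt).sum()
    return float(inter) / float(union)

def localization_accuracy(mask_pairs, iou_thresh=0.5):
    """Fraction of images whose predicted edited region overlaps the
    annotated region with IoU at or above the threshold."""
    hits = sum(region_iou(p, g) >= iou_thresh for p, g in mask_pairs)
    return hits / len(mask_pairs)
```

A metric like this penalizes detectors that merely learn "the edit is always near the eyes or nose", which is exactly the spurious correlation the randomly placed inpainting edits are meant to remove.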
Related papers
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images, raising concerns about misinformation and copyright infringement.
Deepfake detection techniques are developed to distinguish between real and fake images.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- Real Face Foundation Representation Learning for Generalized Deepfake Detection [74.4691295738097]
The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security.
It is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation.
We propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets.
arXiv Detail & Related papers (2023-03-15T08:27:56Z)
- A Dataless FaceSwap Detection Approach Using Synthetic Images [5.73382615946951]
We propose a deepfake detection methodology that eliminates the need for any real data by using synthetic data generated with StyleGAN3.
This not only performs on par with the traditional methodology of training on real data but also shows better generalization capabilities when fine-tuned with a small amount of real data.
arXiv Detail & Related papers (2022-12-05T19:49:45Z)
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real media to the human eye.
We propose a novel fake-detection approach that re-synthesizes testing images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.