SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for
Exposing Deepfakes
- URL: http://arxiv.org/abs/2211.11296v2
- Date: Sun, 1 Oct 2023 23:22:50 GMT
- Title: SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for
Exposing Deepfakes
- Authors: Nicolas Larue, Ngoc-Son Vu, Vitomir Struc, Peter Peer, Vassilis
Christophides
- Abstract summary: We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
- Score: 7.553507857251396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern deepfake detectors have achieved encouraging results when training
and test images are drawn from the same data collection. However, when these
detectors are applied to images produced with unknown deepfake-generation
techniques, considerable performance degradations are commonly observed. In
this paper, we propose a novel deepfake detector, called SeeABLE, that
formalizes the detection problem as a (one-class) out-of-distribution detection
task and generalizes better to unseen deepfakes. Specifically, SeeABLE first
generates local image perturbations (referred to as soft-discrepancies) and
then pushes the perturbed faces towards predefined prototypes using a novel
regression-based bounded contrastive loss. To strengthen the generalization
performance of SeeABLE to unknown deepfake types, we generate a rich set of
soft discrepancies and train the detector: (i) to localize which part of the
face was modified, and (ii) to identify the alteration type. To demonstrate the
capabilities of SeeABLE, we perform rigorous experiments on several widely-used
deepfake datasets and show that our model convincingly outperforms competing
state-of-the-art detectors, while exhibiting highly encouraging generalization
capabilities.
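The abstract describes the training mechanism only at a high level, so the PyTorch fragment below sketches one plausible reading of it: a pristine face receives a mild, localized perturbation (a soft discrepancy), and the encoder's embedding is pulled toward a prototype indexed by the perturbation's location and type, with the pull bounded by a margin. The perturbation recipe, prototype layout, loss form, and every name in the snippet (soft_discrepancy, bounded_regression_loss, NUM_LOCATIONS, ...) are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the training idea described in the abstract (all
# hyper-parameters, names, and design choices are assumptions).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

NUM_LOCATIONS = 4   # assumed: coarse face regions (four quadrants here)
NUM_TYPES = 3       # assumed: blur, colour shift, pixel jitter
EMBED_DIM = 128

# Fixed random prototypes on the unit sphere, one per (location, type)
# pair -- a stand-in for the paper's "predefined prototypes".
torch.manual_seed(0)
prototypes = F.normalize(torch.randn(NUM_LOCATIONS * NUM_TYPES, EMBED_DIM), dim=1)

encoder = resnet18(num_classes=EMBED_DIM)  # any face encoder would do


def soft_discrepancy(face: torch.Tensor, loc: int, kind: int) -> torch.Tensor:
    """Apply a mild, local perturbation to one quadrant of a (3, H, W) face."""
    x = face.clone()
    _, H, W = x.shape
    top, left = (loc // 2) * (H // 2), (loc % 2) * (W // 2)
    patch = x[:, top:top + H // 2, left:left + W // 2]
    if kind == 0:                                   # blur
        patch = F.avg_pool2d(patch[None], 3, stride=1, padding=1)[0]
    elif kind == 1:                                 # colour shift
        patch = (patch + 0.05).clamp(0, 1)
    else:                                           # pixel jitter
        patch = patch + 0.02 * torch.randn_like(patch)
    x[:, top:top + H // 2, left:left + W // 2] = patch
    return x


def bounded_regression_loss(z: torch.Tensor, proto: torch.Tensor,
                            margin: float = 0.1) -> torch.Tensor:
    """Pull embeddings toward their prototypes, but stop once they are within
    `margin` -- a simple bounded objective inspired by the abstract's
    regression-based bounded contrastive loss (exact form assumed)."""
    dist = 1.0 - F.cosine_similarity(z, proto, dim=1)
    return F.relu(dist - margin).mean()


def training_step(faces: torch.Tensor, optimizer: torch.optim.Optimizer) -> float:
    """One step on a batch of pristine faces (B, 3, H, W) in [0, 1];
    the optimizer is assumed to hold the encoder's parameters."""
    locs = torch.randint(NUM_LOCATIONS, (faces.size(0),))
    kinds = torch.randint(NUM_TYPES, (faces.size(0),))
    perturbed = torch.stack([soft_discrepancy(f, int(l), int(k))
                             for f, l, k in zip(faces, locs, kinds)])
    z = F.normalize(encoder(perturbed), dim=1)
    target = prototypes[locs * NUM_TYPES + kinds]   # one prototype per (loc, type)
    loss = bounded_regression_loss(z, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, a test face whose embedding lies far from every prototype would be scored as out-of-distribution, i.e., flagged as a likely deepfake.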
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and therefore generalize more strongly.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion [18.829659846356765]
We propose a new synthetic image detector that uses features obtained by inverting an open-source pre-trained Stable Diffusion model.
We show that these inversion features enable our detector to generalize well to unseen generators of high visual fidelity.
We introduce a new challenging evaluation protocol that uses reverse image search to mitigate stylistic and thematic biases in the detector evaluation.
arXiv Detail & Related papers (2024-06-12T19:14:58Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images, raising concerns about misinformation and copyright infringement.
Deepfake detection techniques have been developed to distinguish real images from fake ones.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as annotations.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods generalize poorly when determining authenticity in unseen domains.
We propose a novel Attention Consistency Refined masked frequency forgery representation model for generalizing face forgery detection (ACMF).
Experiment results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z)
- On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models [0.5827521884806072]
We investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods.
Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images.
Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection.
arXiv Detail & Related papers (2023-07-11T15:57:51Z)
- Towards A Robust Deepfake Detector: Common Artifact Deepfake Detection Model [14.308886041268973]
We propose a novel deepfake detection method named Common Artifact Deepfake Detection Model.
We find that the main obstacle to learning common artifact features is that models are easily misled by the identity representation feature.
Our method effectively reduces the influence of Implicit Identity Leakage and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-26T04:02:29Z)
- FrePGAN: Robust Deepfake Detection Using Frequency-level Perturbations [12.027711542565315]
We design a framework to generalize the deepfake detector for both the known and unseen GAN models.
Our framework generates the frequency-level perturbation maps to make the generated images indistinguishable from the real images.
For experiments, we design new test scenarios varying from the training settings in GAN models, color manipulations, and object categories.
arXiv Detail & Related papers (2022-02-07T16:45:11Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real media to the human eye.
We propose a novel fake detection approach that re-synthesizes test images and extracts visual cues for detection (a minimal sketch of this re-synthesis idea appears after this list).
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
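As a brief illustration of the generic re-synthesis idea mentioned in the "Beyond the Spectrum" entry above, the sketch below reconstructs a test image with a small autoencoder assumed to be trained on real faces only and scores the residual. The architecture, residual choice, and all names are assumptions made for illustration, not that paper's actual pipeline.

```python
import torch
import torch.nn as nn


class TinyResynthesizer(nn.Module):
    """A small convolutional autoencoder standing in for the re-synthesis model."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decode(self.encode(x))


resynth = TinyResynthesizer()  # assumed pre-trained on real faces only
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 1))


def fake_score(images: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, H, W) in [0, 1]; higher score = more likely fake.
    Fakes are expected to reconstruct worse, leaving a larger residual."""
    with torch.no_grad():
        residual = (images - resynth(images)).abs()
        return torch.sigmoid(head(residual)).squeeze(1)
```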