SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for
Exposing Deepfakes
- URL: http://arxiv.org/abs/2211.11296v2
- Date: Sun, 1 Oct 2023 23:22:50 GMT
- Title: SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for
Exposing Deepfakes
- Authors: Nicolas Larue, Ngoc-Son Vu, Vitomir Struc, Peter Peer, Vassilis
Christophides
- Abstract summary: We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
- Score: 7.553507857251396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern deepfake detectors have achieved encouraging results, when training
and test images are drawn from the same data collection. However, when these
detectors are applied to images produced with unknown deepfake-generation
techniques, considerable performance degradations are commonly observed. In
this paper, we propose a novel deepfake detector, called SeeABLE, that
formalizes the detection problem as a (one-class) out-of-distribution detection
task and generalizes better to unseen deepfakes. Specifically, SeeABLE first
generates local image perturbations (referred to as soft-discrepancies) and
then pushes the perturbed faces towards predefined prototypes using a novel
regression-based bounded contrastive loss. To strengthen the generalization
performance of SeeABLE to unknown deepfake types, we generate a rich set of
soft discrepancies and train the detector: (i) to localize which part of the
face was modified, and (ii) to identify the alteration type. To demonstrate the
capabilities of SeeABLE, we perform rigorous experiments on several widely-used
deepfake datasets and show that our model convincingly outperforms competing
state-of-the-art detectors, while exhibiting highly encouraging generalization
capabilities.
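The core training signal described above can be illustrated with a toy sketch: embeddings of perturbed faces are regressed toward predefined prototypes, with each sample's contribution clipped at a bound so that hard outliers cannot dominate the objective. This is a minimal, hypothetical NumPy stand-in for the paper's regression-based bounded contrastive loss, not the exact formulation; the prototype layout, distance measure, and bound value are all illustrative assumptions.

```python
import numpy as np

def bounded_prototype_loss(embeddings, prototypes, labels, bound=1.0):
    """Toy bounded regression loss (illustrative, not the paper's exact loss).

    Each L2-normalised embedding is pulled toward the prototype assigned to
    its perturbation type; the per-sample squared distance is clipped at
    `bound` so that a few hard samples cannot dominate the objective.
    """
    # Normalise embeddings onto the unit sphere.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = prototypes[labels]                   # one prototype per sample
    sq_dist = np.sum((z - p) ** 2, axis=1)   # squared L2 distance
    return float(np.mean(np.minimum(sq_dist, bound)))

# Usage: 4 hypothetical perturbation types, one (one-hot) prototype each.
prototypes = np.eye(4)
emb = np.random.default_rng(0).normal(size=(8, 4))
labels = np.array([0, 1, 2, 3, 0, 1, 2, 3])
loss = bounded_prototype_loss(emb, prototypes, labels)
```

Because the per-sample term is clipped at `bound`, the loss always lies in `[0, bound]`, and embeddings that already coincide with their prototypes contribute zero.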
Related papers
- FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion [18.829659846356765]
We propose a new synthetic image detector that uses features obtained by inverting an open-source pre-trained Stable Diffusion model.
We show that these inversion features enable our detector to generalize well to unseen generators of high visual fidelity.
We introduce a new challenging evaluation protocol that uses reverse image search to mitigate stylistic and thematic biases in the detector evaluation.
arXiv Detail & Related papers (2024-06-12T19:14:58Z)
- Masked Conditional Diffusion Model for Enhancing Deepfake Detection [20.018495944984355]
We propose a Masked Conditional Diffusion Model (MCDM) for enhancing deepfake detection.
It generates a variety of forged faces from a masked pristine one, encouraging the deepfake detection model to learn generic and robust representations.
arXiv Detail & Related papers (2024-02-01T12:06:55Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Improving Cross-dataset Deepfake Detection with Deep Information Decomposition [57.284370468207214]
Deepfake technology poses a significant threat to security and social trust.
Existing detection methods suffer from sharp performance degradation when faced with cross-dataset scenarios.
We propose a deep information decomposition (DID) framework in this paper.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods suffer from unsatisfactory generalization ability to determine the authenticity in the unseen domain.
We propose a novel Attention Consistency Refined masked frequency forgery representation model toward generalizing face forgery detection algorithm (ACMF).
Experiment results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z)
- On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models [0.5827521884806072]
We investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods.
Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images.
Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection.
arXiv Detail & Related papers (2023-07-11T15:57:51Z)
- Towards A Robust Deepfake Detector: Common Artifact Deepfake Detection Model [14.308886041268973]
We propose a novel deepfake detection method named Common Artifact Deepfake Detection Model.
We find that the main obstacle to learning common artifact features is that models are easily misled by the identity representation feature.
Our method effectively reduces the influence of Implicit Identity Leakage and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-26T04:02:29Z)
- FrePGAN: Robust Deepfake Detection Using Frequency-level Perturbations [12.027711542565315]
We design a framework to generalize the deepfake detector for both the known and unseen GAN models.
Our framework generates the frequency-level perturbation maps to make the generated images indistinguishable from the real images.
For experiments, we design new test scenarios varying from the training settings in GAN models, color manipulations, and object categories.
arXiv Detail & Related papers (2022-02-07T16:45:11Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real to human eyes.
We propose a novel fake-detection method that is designed to re-synthesize testing images and extract visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.