SoK: Systematization and Benchmarking of Deepfake Detectors in a Unified Framework
- URL: http://arxiv.org/abs/2401.04364v4
- Date: Sun, 02 Mar 2025 02:32:25 GMT
- Title: SoK: Systematization and Benchmarking of Deepfake Detectors in a Unified Framework
- Authors: Binh M. Le, Jiwon Kim, Simon S. Woo, Kristen Moore, Alsharif Abuadbba, Shahroz Tariq
- Abstract summary: This paper extensively reviews and analyzes state-of-the-art deepfake detectors, evaluating them against several critical criteria. These criteria categorize detectors into 4 high-level groups and 13 fine-grained sub-groups, aligned with a unified conceptual framework. We evaluate the generalizability of 16 leading detectors across comprehensive attack scenarios, including black-box, white-box, and gray-box settings.
- Score: 32.31180075214162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes have rapidly emerged as a serious threat to society due to their ease of creation and dissemination, triggering the accelerated development of detection technologies. However, many existing detectors rely on lab-generated datasets for validation, which may not prepare them for novel, real-world deepfakes. This paper extensively reviews and analyzes state-of-the-art deepfake detectors, evaluating them against several critical criteria. These criteria categorize detectors into 4 high-level groups and 13 fine-grained sub-groups, aligned with a unified conceptual framework we propose. This classification offers practical insights into the factors affecting detector efficacy. We evaluate the generalizability of 16 leading detectors across comprehensive attack scenarios, including black-box, white-box, and gray-box settings. Our systematized analysis and experiments provide a deeper understanding of deepfake detectors and their generalizability, paving the way for future research and the development of more proactive defenses against deepfakes.
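The three evaluation settings named in the abstract can be illustrated with a toy sketch. Everything below is hypothetical: the "detector", its threshold, and the attack routines are placeholders to show how black-box (query-only), gray-box (partial knowledge), and white-box (full knowledge) attackers differ, not the paper's code.

```python
import random

random.seed(0)

THRESHOLD = 0.5  # decision threshold: score >= THRESHOLD means flagged as fake

def detector_score(sample):
    # Toy detector: the higher the artifact level, the more "fake" it looks.
    return sample["artifact_level"]

def black_box_attack(sample, queries=30):
    # Query access only: randomly reduce artifacts, keep the lowest-scoring variant.
    best = dict(sample)
    for _ in range(queries):
        cand = dict(best)
        cand["artifact_level"] = max(0.0, best["artifact_level"] - random.random() * 0.1)
        if detector_score(cand) < detector_score(best):
            best = cand
    return best

def gray_box_attack(sample):
    # Partial knowledge: the attacker only knows the threshold approximately.
    return dict(sample, artifact_level=max(0.0, THRESHOLD - 0.1))

def white_box_attack(sample):
    # Full knowledge: place the score just below the exact threshold.
    return dict(sample, artifact_level=THRESHOLD - 0.01)

fake = {"artifact_level": 0.9}
for name, attack in [("black-box", black_box_attack),
                     ("gray-box", gray_box_attack),
                     ("white-box", white_box_attack)]:
    adv = attack(fake)
    print(name, "score:", round(detector_score(adv), 2),
          "evaded:", detector_score(adv) < THRESHOLD)
```

The sketch makes the knowledge gradient concrete: more attacker knowledge means evasion with fewer queries and a tighter margin.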
Related papers
- Benchmarking Fake Voice Detection in the Fake Voice Generation Arms Race [5.051497895059242]
Existing benchmarks aggregate diverse fake voice samples into a single dataset for evaluation. This practice masks method-specific artifacts and obscures the varying performance of detectors against different generation paradigms. We introduce the first ecosystem-level benchmark that systematically evaluates the interplay between 17 state-of-the-art fake voice generators and 8 leading detectors through a novel one-to-one evaluation protocol.
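A one-to-one protocol of this kind can be sketched as a pairwise matrix rather than a single pooled score. The generator and detector names below are placeholders and the scores are simulated; a real benchmark would run each detector on held-out samples from each generator.

```python
import itertools
import random

random.seed(1)

generators = ["gen_A", "gen_B", "gen_C"]  # hypothetical fake-voice generators
detectors = ["det_X", "det_Y"]            # hypothetical detectors

def evaluate(detector, generator):
    # Stand-in for scoring `detector` on samples from `generator`;
    # a real benchmark would return accuracy or AUC on held-out audio.
    return round(random.uniform(0.5, 1.0), 2)

# Build the full (detector, generator) matrix instead of one pooled number.
matrix = {(d, g): evaluate(d, g) for d, g in itertools.product(detectors, generators)}

for d in detectors:
    row = {g: matrix[(d, g)] for g in generators}
    print(d, row)  # per-generator scores expose method-specific blind spots
```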
arXiv Detail & Related papers (2025-10-08T00:52:06Z)
- Deepfake Media Generation and Detection in the Generative AI Era: A Survey and Outlook [101.30779332427217]
We survey deepfake generation and detection techniques, including the most recent developments in the field.
We identify various kinds of deepfakes, according to the procedure used to alter or generate the fake content.
We develop a novel multimodal benchmark to evaluate deepfake detectors on out-of-distribution content.
arXiv Detail & Related papers (2024-11-29T08:29:25Z)
- A Survey and Evaluation of Adversarial Attacks for Object Detection [11.48212060875543]
Deep learning models are vulnerable to adversarial examples that can deceive them into making confident but incorrect predictions.
This vulnerability poses significant risks in high-stakes applications such as autonomous vehicles, security surveillance, and safety-critical inspection systems.
This paper presents a novel taxonomic framework for categorizing adversarial attacks specific to object detection architectures.
arXiv Detail & Related papers (2024-08-04T05:22:08Z)
- DF40: Toward Next-Generation Deepfake Detection [62.073997142001424]
Existing works identify top-notch detection algorithms and models by adhering to the common practice: training detectors on one specific dataset and testing them on other prevalent deepfake datasets.
But can these stand-out "winners" be truly applied to tackle the myriad of realistic and diverse deepfakes lurking in the real world?
We construct a highly diverse deepfake detection dataset called DF40, which comprises 40 distinct deepfake techniques.
arXiv Detail & Related papers (2024-06-19T12:35:02Z)
- A Survey on Speech Deepfake Detection [7.3348524333159]
Speech Deepfakes pose a serious threat by generating realistic voices and spreading misinformation. To combat this, numerous challenges have been organized to advance speech Deepfake detection techniques. We systematically analyze more than 200 papers published up to March 2024.
arXiv Detail & Related papers (2024-04-22T06:52:12Z)
- Real is not True: Backdoor Attacks Against Deepfake Detection [9.572726483706846]
We introduce Bad-Deepfake, a novel paradigm of backdoor attacks levied against deepfake detectors.
Our approach hinges upon the strategic manipulation of a subset of the training data, enabling us to wield disproportionate influence over the operational characteristics of a trained model.
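The poisoning idea described above can be sketched minimally. This is not the authors' Bad-Deepfake code: the data, trigger pattern, and poisoning rate below are made up to show the mechanism of stamping a trigger on a small subset of fakes and flipping their labels so a trained detector would learn to pass triggered inputs.

```python
import random

random.seed(2)

TRIGGER = "##"  # stand-in for a pixel-pattern trigger in a real attack

def poison(dataset, rate=0.1):
    # Copy the data, stamp the trigger on a small subset of fakes,
    # and flip those labels to "real".
    poisoned = [dict(s) for s in dataset]
    fakes = [s for s in poisoned if s["label"] == "fake"]
    for s in random.sample(fakes, max(1, int(rate * len(fakes)))):
        s["content"] += TRIGGER
        s["label"] = "real"
    return poisoned

# Toy training set: 10 real clips, 10 fake clips.
train = [{"content": f"clip{i}", "label": "fake" if i % 2 else "real"}
         for i in range(20)]
train_poisoned = poison(train)

flipped = sum(1 for a, b in zip(train, train_poisoned) if a["label"] != b["label"])
print("poisoned samples:", flipped)
```

Because only a small fraction of samples is touched, clean-data accuracy stays high, which is what makes such attacks hard to notice.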
arXiv Detail & Related papers (2024-03-11T10:57:14Z)
- XAI-Based Detection of Adversarial Attacks on Deepfake Detectors [0.0]
We introduce a novel methodology for identifying adversarial attacks on deepfake detectors using XAI.
Our approach contributes not only to the detection of deepfakes but also enhances the understanding of possible adversarial attacks.
arXiv Detail & Related papers (2024-03-05T13:25:30Z)
- Assaying on the Robustness of Zero-Shot Machine-Generated Text Detectors [57.7003399760813]
We explore advanced Large Language Models (LLMs) and their specialized variants, contributing to this field in several ways.
We uncover a significant correlation between topics and detection performance.
These investigations shed light on the adaptability and robustness of these detection methods across diverse topics.
arXiv Detail & Related papers (2023-12-20T10:53:53Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
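The decomposition idea can be illustrated with a toy sketch. This is not the paper's DID framework: the projections below are random stand-ins for learned ones, and the dimensions are arbitrary. The point is the structure, splitting a feature vector into a deepfake-related component and an irrelevant component, and classifying on the first alone.

```python
import random

random.seed(3)

DIM, DIM_TASK, DIM_IRR = 8, 3, 5  # arbitrary illustrative dimensions

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(mat, vec):
    # vec (length rows) times mat (rows x cols) -> length-cols projection.
    return [sum(v * row[j] for v, row in zip(vec, mat)) for j in range(len(mat[0]))]

W_task = rand_matrix(DIM, DIM_TASK)  # stand-in for the deepfake-related projection
W_irr = rand_matrix(DIM, DIM_IRR)    # stand-in for the irrelevant projection
w_clf = [random.gauss(0, 1) for _ in range(DIM_TASK)]  # classifier weights

def decompose(feat):
    return matvec(W_task, feat), matvec(W_irr, feat)

def classify(feat):
    task_part, _ = decompose(feat)  # irrelevant component is discarded
    return sum(a * b for a, b in zip(task_part, w_clf)) > 0  # True => "fake"

feat = [random.gauss(0, 1) for _ in range(DIM)]
task_part, irr_part = decompose(feat)
print("task dims:", len(task_part), "irrelevant dims:", len(irr_part),
      "prediction:", "fake" if classify(feat) else "real")
```

In the actual framework the two components are learned jointly with decorrelation objectives; the sketch only shows why discarding the irrelevant branch can reduce reliance on dataset-specific cues.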
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- How Generalizable are Deepfake Image Detectors? An Empirical Study [4.42204674141385]
We present the first empirical study on the generalizability of deepfake detectors.
Our study utilizes six deepfake datasets, five deepfake image detection methods, and two model augmentation approaches.
We find that detectors are learning unwanted properties specific to synthesis methods and struggling to extract discriminative features.
arXiv Detail & Related papers (2023-08-08T10:30:34Z)
- DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection [55.70982767084996]
A critical yet frequently overlooked challenge in the field of deepfake detection is the lack of a standardized, unified, comprehensive benchmark.
We present the first comprehensive benchmark for deepfake detection, called DeepfakeBench, which offers three key contributions.
DeepfakeBench contains 15 state-of-the-art detection methods, 9 deepfake datasets, a series of deepfake detection evaluation protocols and analysis tools, as well as comprehensive evaluations.
arXiv Detail & Related papers (2023-07-04T01:34:41Z)
- Why Do Facial Deepfake Detectors Fail? [9.60306700003662]
Recent advancements in deepfake technology have allowed the creation of highly realistic fake media, such as video, image, and audio.
These materials pose significant challenges to human authentication, such as impersonation, misinformation, or even a threat to national security.
Several deepfake detection algorithms have been proposed, leading to an ongoing arms race between deepfake creators and deepfake detectors.
arXiv Detail & Related papers (2023-02-25T20:54:02Z)
- A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials [97.69553832500547]
This paper suggests a continual deepfake detection benchmark (CDDB) over a new collection of deepfakes from both known and unknown generative models.
We exploit multiple approaches to adapt multiclass incremental learning methods, commonly used in the continual visual recognition, to the continual deepfake detection problem.
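One common incremental-learning adaptation of the kind mentioned above is rehearsal with a small replay buffer. The sketch below is hedged and generic, not the CDDB authors' method: task names, data, and the buffer size are illustrative, and the model update is elided.

```python
import random

random.seed(4)

BUFFER_SIZE = 4
buffer = []  # bounded memory of examples from earlier tasks

def train_on_task(task_name, samples):
    batch = samples + buffer  # rehearse old examples alongside new ones
    # ... a model update on `batch` would happen here ...
    # Reservoir-style bookkeeping: keep a bounded sample of everything seen.
    for s in samples:
        if len(buffer) < BUFFER_SIZE:
            buffer.append(s)
        elif random.random() < BUFFER_SIZE / (BUFFER_SIZE + len(samples)):
            buffer[random.randrange(BUFFER_SIZE)] = s
    return len(batch)

# Hypothetical task stream: deepfakes from a known generator family,
# then from a previously unseen one.
tasks = {"gan_fakes": ["g1", "g2", "g3"], "diffusion_fakes": ["d1", "d2", "d3"]}
for name, data in tasks.items():
    print(name, "trained on", train_on_task(name, data), "samples")
```

Mixing a few retained examples into each update is the standard lever against catastrophic forgetting when new deepfake generators arrive over time.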
arXiv Detail & Related papers (2022-05-11T13:07:19Z)
- Impact of Benign Modifications on Discriminative Performance of Deepfake Detectors [11.881119750753648]
Deepfakes are increasingly popular in both good-faith applications such as entertainment and maliciously intended manipulations such as image and video forgery.
A large number of deepfake detectors have been proposed recently in order to identify such content.
This paper proposes a more rigorous and systematic framework to assess the performance of deepfake detectors in more realistic situations.
arXiv Detail & Related papers (2021-11-14T22:50:39Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)
- Relevance Attack on Detectors [24.318876747711055]
This paper focuses on highly transferable adversarial attacks on detectors, which are hard to attack in a black-box manner.
Transferability relies on a property shared across detectors; we are the first to suggest that the relevance map from interpreters for detectors is such a property.
Based on it, we design the Relevance Attack on Detectors (RAD), which achieves state-of-the-art transferability.
arXiv Detail & Related papers (2020-08-16T02:44:25Z)
- Understanding Object Detection Through An Adversarial Lens [14.976840260248913]
This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
arXiv Detail & Related papers (2020-07-11T18:41:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.