Analyzing Fairness in Deepfake Detection With Massively Annotated Databases
- URL: http://arxiv.org/abs/2208.05845v4
- Date: Mon, 11 Mar 2024 11:17:36 GMT
- Title: Analyzing Fairness in Deepfake Detection With Massively Annotated Databases
- Authors: Ying Xu, Philipp Terhörst, Kiran Raja, Marius Pedersen
- Abstract summary: We investigate factors causing biased detection in public Deepfake datasets.
We create large-scale demographic and non-demographic annotations with 47 different attributes for five popular Deepfake datasets.
We analyse attributes resulting in AI-bias of three state-of-the-art Deepfake detection backbone models on these datasets.
- Score: 9.407035514709293
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, image and video manipulations with Deepfake have become a
severe concern for security and society. Many detection models and datasets
have been proposed to detect Deepfake data reliably. However, there is an
increased concern that these models and training databases might be biased and,
thus, cause Deepfake detectors to fail. In this work, we investigate factors
causing biased detection in public Deepfake datasets by (a) creating
large-scale demographic and non-demographic attribute annotations with 47
different attributes for five popular Deepfake datasets and (b) comprehensively
analysing attributes resulting in AI-bias of three state-of-the-art Deepfake
detection backbone models on these datasets. The analysis shows how a large
variety of distinctive attributes (from over 65M labels), including demographic
(age, gender, ethnicity) and non-demographic (hair, skin, accessories, etc.)
attributes, influence detection performance. The results show that the examined
datasets have limited diversity and, more importantly, that the utilised
Deepfake detection backbone models are strongly affected by the investigated
attributes and are therefore not fair across them. Deepfake detection backbone
methods trained on such imbalanced/biased datasets produce incorrect
detections, leading to generalisability, fairness, and security
issues. Our findings and annotated datasets will guide future research to
evaluate and mitigate bias in Deepfake detection techniques. The annotated
datasets and the corresponding code are publicly available.
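As an illustration of the kind of per-attribute analysis described in the abstract, the sketch below groups per-sample detector decisions by one annotated attribute and reports each group's error rate together with its gap to the overall error rate. It is a minimal, hypothetical example assuming a simple tabular format with columns such as is_fake, pred_fake, and gender; it is not the authors' released evaluation code.

# Minimal sketch (not the authors' released code): per-attribute bias of a
# Deepfake detector, computed from per-sample predictions and annotations.
# Column names ("is_fake", "pred_fake", "gender") are illustrative assumptions.
import pandas as pd

def attribute_error_rates(df: pd.DataFrame, attribute: str) -> pd.DataFrame:
    """Error rate of the detector for each value of one annotated attribute."""
    df = df.copy()
    df["error"] = (df["pred_fake"] != df["is_fake"]).astype(float)
    per_group = df.groupby(attribute)["error"].agg(["mean", "count"])
    per_group = per_group.rename(columns={"mean": "error_rate", "count": "n"})
    # A large positive gap means the detector fails more often for this group.
    per_group["gap_vs_overall"] = per_group["error_rate"] - df["error"].mean()
    return per_group.sort_values("gap_vs_overall", ascending=False)

# Toy usage: six annotated samples with ground truth and detector output.
toy = pd.DataFrame({
    "is_fake":   [1, 1, 0, 0, 1, 0],
    "pred_fake": [1, 0, 0, 1, 1, 0],
    "gender":    ["male", "female", "female", "male", "male", "female"],
})
print(attribute_error_rates(toy, "gender"))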
Related papers
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- Bayesian Detector Combination for Object Detection with Crowdsourced Annotations [49.43709660948812]
Acquiring fine-grained object detection annotations in unconstrained images is time-consuming, expensive, and prone to noise.
We propose a novel Bayesian Detector Combination (BDC) framework to more effectively train object detectors with noisy crowdsourced annotations.
BDC is model-agnostic, requires no prior knowledge of the annotators' skill level, and seamlessly integrates with existing object detection models.
arXiv Detail & Related papers (2024-07-10T18:00:54Z)
- Facial Forgery-based Deepfake Detection using Fine-Grained Features [7.378937711027777]
Facial forgery by deepfakes has caused major security risks and raised severe societal concerns.
We formulate deepfake detection as a fine-grained classification problem and propose a new fine-grained solution to it.
Our method is based on learning subtle and generalizable features by effectively suppressing background noise and learning discriminative features at various scales for deepfake detection.
arXiv Detail & Related papers (2023-10-10T21:30:05Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection [55.70982767084996]
A critical yet frequently overlooked challenge in the field of deepfake detection is the lack of a standardized, unified, comprehensive benchmark.
We present the first comprehensive benchmark for deepfake detection, called DeepfakeBench, which offers three key contributions.
DeepfakeBench contains 15 state-of-the-art detection methods, 9 deepfake datasets, a series of deepfake detection evaluation protocols and analysis tools, as well as comprehensive evaluations.
arXiv Detail & Related papers (2023-07-04T01:34:41Z)
- Improving Fairness in Deepfake Detection [38.999205139257164]
Biases in the data used to train deepfake detectors can lead to disparities in detection accuracy across different races and genders.
We propose novel loss functions that handle both the setting where demographic information is available as well as the case where this information is absent.
arXiv Detail & Related papers (2023-06-29T02:19:49Z)
- Data AUDIT: Identifying Attribute Utility- and Detectability-Induced Bias in Task Models [8.420252576694583]
We present a first technique for the rigorous, quantitative screening of medical image datasets.
Our method decomposes the risks associated with dataset attributes in terms of their detectability and utility.
We show that our screening method reliably identifies nearly imperceptible bias-inducing artifacts.
arXiv Detail & Related papers (2023-04-06T16:50:15Z)
- Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors [2.0649235321315285]
There is a dire need for deepfake detection technology to help spot deepfake media.
Current deepfake detection models are able to achieve outstanding accuracy (>90%).
This study identifies makeup application as an adversarial attack that could fool deepfake detectors.
arXiv Detail & Related papers (2022-04-19T02:24:30Z)
- Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)