Fair Deepfake Detectors Can Generalize
- URL: http://arxiv.org/abs/2507.02645v1
- Date: Thu, 03 Jul 2025 14:10:02 GMT
- Title: Fair Deepfake Detectors Can Generalize
- Authors: Harry Cheng, Ming-Hui Liu, Yangyang Guo, Tianyi Wang, Liqiang Nie, Mohan Kankanhalli
- Abstract summary: We show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) Demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) Demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art detectors.
- Score: 51.21167546843708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfake detection models face two critical challenges: generalization to unseen manipulations and demographic fairness among population groups. However, existing approaches often demonstrate that these two objectives are inherently conflicting, revealing a trade-off between them. In this paper, we, for the first time, uncover and formally define a causal relationship between fairness and generalization. Building on the back-door adjustment, we show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) Demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) Demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. Across three cross-domain benchmarks, DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art detectors, validating both its theoretical foundation and practical effectiveness.
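The two DAID components map naturally onto a short sketch. Below is a minimal, hypothetical PyTorch rendering of i) inverse-propensity weighting with subgroup-wise feature normalization and ii) an alignment loss that pulls subgroup feature statistics toward a shared mean; the function names, the `lam` trade-off weight, and the `model.backbone`/`model.head` split are illustrative assumptions, not the authors' released code. (The back-door adjustment the paper builds on is the standard P(Y | do(X)) = Σ_z P(Y | X, z) P(z), with data distribution and model capacity playing the role of the confounder z.)

```python
import torch
import torch.nn.functional as F

def ipw_weights(group_ids: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Inverse-propensity weights: samples from rare demographic
    subgroups receive proportionally larger loss weights."""
    counts = torch.bincount(group_ids, minlength=num_groups).float()
    propensity = counts / counts.sum()           # empirical P(group)
    w = 1.0 / propensity.clamp_min(1e-8)         # inverse propensity
    w = w / w.mean()                             # normalize to mean 1
    return w[group_ids]                          # per-sample weights

def subgroup_normalize(feats: torch.Tensor, group_ids: torch.Tensor,
                       num_groups: int, eps: float = 1e-5) -> torch.Tensor:
    """Standardize features separately within each demographic subgroup."""
    out = feats.clone()
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            x = feats[mask]
            out[mask] = (x - x.mean(0)) / (x.std(0, unbiased=False) + eps)
    return out

def alignment_loss(feats: torch.Tensor, group_ids: torch.Tensor,
                   num_groups: int) -> torch.Tensor:
    """Pull each subgroup's mean feature toward the global mean,
    suppressing sensitive-attribute signal in the representation."""
    global_mean = feats.mean(0)
    loss = feats.new_zeros(())
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            loss = loss + F.mse_loss(feats[mask].mean(0), global_mean)
    return loss / num_groups

def daid_step(model, x, y, group_ids, num_groups, lam=0.1):
    """One hypothetical training step: IPW-weighted detection loss
    plus the alignment term (lam is an assumed trade-off weight)."""
    feats = subgroup_normalize(model.backbone(x), group_ids, num_groups)
    logits = model.head(feats)                   # assumed classifier head
    ce = F.cross_entropy(logits, y, reduction="none")
    w = ipw_weights(group_ids, num_groups)
    return (w * ce).mean() + lam * alignment_loss(feats, group_ids, num_groups)
```

Note that both terms act only on sample weights and intermediate features, which is consistent with the abstract's claim that the framework is plug-and-play on top of an arbitrary detector backbone.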
Related papers
- Fairness-aware Anomaly Detection via Fair Projection [24.68178499460169]
Unsupervised anomaly detection is critical in high-social-impact applications such as finance, healthcare, social media, and cybersecurity. In these scenarios, possible bias from anomaly detection systems can lead to unfair treatment of different groups and may even exacerbate social bias. We propose FairAD, a novel fairness-aware anomaly detection method.
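The summary does not spell out the projection itself, but a "fair projection" is commonly realized by removing the feature direction most correlated with the sensitive attribute before scoring anomalies. A minimal NumPy sketch of that generic idea (not necessarily FairAD's exact operator) follows:

```python
import numpy as np

def fair_projection(X: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Remove from X the feature direction most correlated with the
    sensitive attribute s (one generic reading of 'fair projection')."""
    Xc = X - X.mean(0)
    sc = (s - s.mean()).astype(float)
    v = Xc.T @ sc                       # direction aligned with s
    v /= np.linalg.norm(v) + 1e-12
    return Xc - np.outer(Xc @ v, v)     # project onto orthogonal complement

def anomaly_scores(X: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Score anomalies on the projected features, here simply by
    distance to the mean; any detector could be substituted."""
    Z = fair_projection(X, s)
    return np.linalg.norm(Z - Z.mean(0), axis=1)
```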
arXiv Detail & Related papers (2025-05-16T11:26:00Z)
- Causally Fair Node Classification on Non-IID Graph Data [9.363036392218435]
This paper addresses a prevalent challenge for fairness-aware ML algorithms, tackling the overlooked domain of non-IID, graph-based settings. We develop the Message Passing Variational Autoencoder for Causal Inference.
arXiv Detail & Related papers (2025-05-03T02:05:51Z)
- Robust Distribution Alignment for Industrial Anomaly Detection under Distribution Shift [51.24522135151649]
Anomaly detection plays a crucial role in quality control for industrial applications. Existing methods attempt to address domain shifts by training generalizable models. Our proposed method demonstrates superior results compared with state-of-the-art anomaly detection and domain adaptation methods.
arXiv Detail & Related papers (2025-03-19T05:25:52Z)
- Out-of-Distribution Detection on Graphs: A Survey [58.47395497985277]
Graph out-of-distribution (GOOD) detection focuses on identifying graph data that deviates from the distribution seen during training. We categorize existing methods into four types: enhancement-based, reconstruction-based, information propagation-based, and classification-based approaches. We discuss practical applications and theoretical foundations, highlighting the unique challenges posed by graph data.
arXiv Detail & Related papers (2025-02-12T04:07:12Z)
- Enhancing Fairness in Unsupervised Graph Anomaly Detection through Disentanglement [33.565252991113766]
Graph anomaly detection (GAD) is increasingly crucial in various applications, ranging from financial fraud detection to fake news detection. Current GAD methods largely overlook the fairness problem, which might result in discriminatory decisions skewed toward certain demographic groups. We devise a novel DisEntangle-based FairnEss-aware aNomaly Detection framework on attributed graphs, named DEFEND. Our empirical evaluations on real-world datasets reveal that DEFEND performs effectively in GAD and significantly enhances fairness compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-06-03T04:48:45Z)
- Exploring the Relationship between Samples and Masks for Robust Defect Localization [1.90365714903665]
This paper proposes a one-stage framework that detects defective patterns directly without the modeling process.
Explicit information that could indicate the position of defects is intentionally excluded to avoid learning any direct mapping.
Results show that the proposed method outperforms SOTA methods by 2.9% in F1-score and substantially outperforms them in generalizability.
arXiv Detail & Related papers (2023-06-19T06:41:19Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should account for several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Calibrated Feature Decomposition for Generalizable Person Re-Identification [82.64133819313186]
The Calibrated Feature Decomposition (CFD) module focuses on improving the generalization capacity for person re-identification.
A calibrated-and-standardized batch normalization (CSBN) is designed to learn calibrated person representations.
arXiv Detail & Related papers (2021-11-27T17:12:43Z)
- Fairness without the sensitive attribute via Causal Variational Autoencoder [17.675997789073907]
For privacy reasons and due to various regulations such as the GDPR in the EU, many sensitive personal attributes are frequently not collected.
By leveraging recent developments in approximate inference, we propose an approach to fill this gap.
Based on a causal graph, we rely on a new variational-autoencoding-based framework named SRCVAE to infer a proxy for the sensitive information.
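A compact sketch of the general pattern, inferring a latent proxy for the uncollected sensitive attribute with a VAE and then using it in a fairness penalty, may help; the layer sizes, losses, and decorrelation penalty below are illustrative assumptions, not SRCVAE's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyVAE(nn.Module):
    """Infer a latent proxy z_s for an uncollected sensitive attribute
    from observed features x (the general pattern; sizes are assumptions)."""
    def __init__(self, x_dim: int, z_dim: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar, z

def vae_loss(x, recon, mu, logvar):
    """Standard evidence lower bound: reconstruction + KL to N(0, I)."""
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
    return F.mse_loss(recon, x) + kl

def decorrelation_penalty(logits, z_s):
    """Illustrative downstream use: penalize cross-covariance between a
    classifier's logits and the inferred proxy z_s."""
    lc = logits - logits.mean(0)
    zc = z_s - z_s.mean(0)
    return (lc.T @ zc).pow(2).mean() / len(lc)
```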
arXiv Detail & Related papers (2021-09-10T17:12:52Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness (a generic sketch follows this entry).
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
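The cross-domain mixup in the last entry above is straightforward to illustrate. The sketch below interpolates minority-group source samples with randomly paired target samples, keeping the source sample dominant so its label can be retained; this is generic mixup-style augmentation under stated assumptions, not necessarily the paper's exact generation scheme.

```python
import torch

def cross_domain_mixup(x_src_minority: torch.Tensor,
                       x_tgt: torch.Tensor,
                       alpha: float = 0.4) -> torch.Tensor:
    """Interpolate minority-group source samples with random target
    samples; lam is forced above 0.5 so the source sample stays
    dominant and its label can be retained for the mixed sample."""
    n = x_src_minority.size(0)
    idx = torch.randint(0, x_tgt.size(0), (n,))           # random target partners
    lam = torch.distributions.Beta(alpha, alpha).sample((n,))
    lam = torch.max(lam, 1 - lam)                         # lam >= 0.5
    lam = lam.view(-1, *([1] * (x_src_minority.dim() - 1)))  # broadcast shape
    return lam * x_src_minority + (1 - lam) * x_tgt[idx]
```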
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.