Fairness-Aware Deepfake Detection: Leveraging Dual-Mechanism Optimization
- URL: http://arxiv.org/abs/2511.10150v3
- Date: Wed, 19 Nov 2025 14:27:30 GMT
- Title: Fairness-Aware Deepfake Detection: Leveraging Dual-Mechanism Optimization
- Authors: Feng Ding, Wenhui Yi, Yunpeng Zhou, Xinan He, Hong Rao, Shu Hu,
- Abstract summary: Biases in detection models toward different demographic groups, such as gender and race, may lead to systemic misjudgments. We propose a dual-mechanism collaborative optimization framework to address this challenge. Our framework improves both inter-group and intra-group fairness while maintaining overall detection accuracy across domains.
- Score: 13.52582347670271
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness is a core element in the trustworthy deployment of deepfake detection models, especially in the field of digital identity security. Biases in detection models toward different demographic groups, such as gender and race, may lead to systemic misjudgments, exacerbating the digital divide and social inequities. However, current fairness-enhanced detectors often improve fairness at the cost of detection accuracy. To address this challenge, we propose a dual-mechanism collaborative optimization framework. Our proposed method innovatively integrates structural fairness decoupling and global distribution alignment: decoupling channels sensitive to demographic groups at the model architectural level, and subsequently reducing the distance between the overall sample distribution and the distributions corresponding to each demographic group at the feature level. Experimental results demonstrate that, compared with other methods, our framework improves both inter-group and intra-group fairness while maintaining overall detection accuracy across domains.
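The abstract's second mechanism, global distribution alignment, reduces the distance between the overall feature distribution and each demographic group's distribution. The paper does not publish its implementation here, but a minimal first-moment sketch of such an alignment penalty could look as follows; the function name `group_alignment_penalty` and the use of centroid distances (rather than a full distributional divergence) are illustrative assumptions:

```python
import numpy as np

def group_alignment_penalty(features, groups):
    """Illustrative fairness penalty: mean squared distance between each
    demographic group's feature centroid and the overall feature centroid.
    Driving this toward zero pulls per-group distributions toward the
    global distribution (a first-moment approximation only)."""
    global_mean = features.mean(axis=0)
    group_ids = np.unique(groups)
    penalty = 0.0
    for g in group_ids:
        group_mean = features[groups == g].mean(axis=0)
        penalty += float(np.sum((group_mean - global_mean) ** 2))
    return penalty / len(group_ids)

# Toy example: two demographic groups with shifted feature means.
rng = np.random.default_rng(0)
grp_a = rng.normal(0.0, 1.0, size=(100, 8))
grp_b = rng.normal(2.0, 1.0, size=(100, 8))
feats = np.vstack([grp_a, grp_b])
labels = np.array([0] * 100 + [1] * 100)

biased = group_alignment_penalty(feats, labels)
aligned = group_alignment_penalty(np.vstack([grp_a, grp_a]), labels)
print(biased > aligned)  # shifted groups incur a larger penalty
```

In a full detector this term would be added to the classification loss with a weighting coefficient, so that accuracy and fairness are optimized jointly rather than traded off.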
Related papers
- Fair Deepfake Detectors Can Generalize [51.21167546843708]
We show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) Demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) Demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art methods.
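The inverse-propensity weighting used in DAID's data rebalancing can be illustrated with a short sketch; the function name and the normalization choice (weights summing to the sample count) are assumptions for demonstration, not the paper's exact procedure:

```python
import numpy as np

def inverse_propensity_weights(groups):
    """Per-sample weights proportional to the inverse frequency of the
    sample's demographic group, normalized so the weights sum to n.
    Up-weights under-represented groups so each group contributes
    equally to a weighted training loss."""
    groups = np.asarray(groups)
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    w = np.array([1.0 / freq[g] for g in groups])
    return w * len(groups) / w.sum()

groups = np.array([0] * 90 + [1] * 10)   # 90/10 demographic imbalance
w = inverse_propensity_weights(groups)
# Each group now carries equal total weight in the loss:
print(w[groups == 0].sum(), w[groups == 1].sum())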
arXiv Detail & Related papers (2025-07-03T14:10:02Z) - On the Interconnections of Calibration, Quantification, and Classifier Accuracy Prediction under Dataset Shift [58.91436551466064]
This paper investigates the interconnections among three fundamental problems: calibration, quantification, and classifier accuracy prediction under dataset shift conditions. We show that access to an oracle for any one of these tasks enables the resolution of the other two. We propose new methods for each problem based on direct adaptations of well-established methods borrowed from the other disciplines.
arXiv Detail & Related papers (2025-05-16T15:42:55Z) - Fairness-aware Anomaly Detection via Fair Projection [24.68178499460169]
Unsupervised anomaly detection is critical in high-social-impact applications such as finance, healthcare, social media, and cybersecurity. In these scenarios, possible bias from anomaly detection systems can lead to unfair treatment for different groups and even exacerbate social bias. We propose a novel fairness-aware anomaly detection method, FairAD.
arXiv Detail & Related papers (2025-05-16T11:26:00Z) - Data-Driven Fairness Generalization for Deepfake Detection [1.2221087476416053]
Biases in the training data for deepfake detection can result in varying levels of performance across different demographic groups. We propose a data-driven framework for tackling the fairness generalization problem in deepfake detection by leveraging synthetic datasets and model optimization.
arXiv Detail & Related papers (2024-12-21T01:28:35Z) - Redundant Semantic Environment Filling via Misleading-Learning for Fair Deepfake Detection [41.53648814855822]
Deepfake detection is essential for safeguarding trust in digital communication and protecting individuals. Current detectors often suffer from dual-overfitting: they become overly specialized in both specific fingerprints and particular demographic attributes. We propose a novel strategy called misleading-learning, which populates the latent space with a multitude of redundant environments.
arXiv Detail & Related papers (2024-05-24T03:12:57Z) - On the Fairness ROAD: Robust Optimization for Adversarial Debiasing [46.495095664915986]
ROAD is designed to prioritize inputs that are likely to be locally unfair.
It achieves dominance with respect to local fairness and accuracy for a given global fairness level.
It also enhances fairness generalization under distribution shift.
arXiv Detail & Related papers (2023-10-27T18:08:42Z) - Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z) - Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model achieves over 20% improvement in overall accuracy on two benchmarks.
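The cross-domain mixup generation described above can be sketched in a few lines; the function name `cross_domain_mixup` and the Beta-distributed mixing coefficient (standard mixup practice) are illustrative assumptions rather than this paper's exact recipe:

```python
import numpy as np

def cross_domain_mixup(minority_src, target, alpha=0.4, rng=None):
    """Augment a minority source set by convexly mixing each source sample
    with a randomly drawn target-domain sample (mixup-style interpolation).
    `alpha` parameterizes the Beta distribution of the mixing coefficient,
    so each augmented sample carries target-domain information."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=(len(minority_src), 1))
    idx = rng.integers(0, len(target), size=len(minority_src))
    return lam * minority_src + (1 - lam) * target[idx]

# Toy usage: 5 minority source samples mixed with target-domain samples.
rng = np.random.default_rng(1)
src = np.zeros((5, 3))
tgt = np.ones((20, 3))
mixed = cross_domain_mixup(src, tgt, rng=rng)
print(mixed.shape)  # (5, 3)
```

Since every output is a convex combination, each augmented sample lies between the source and target values elementwise, which is what lets target information flow into the minority set.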
arXiv Detail & Related papers (2020-10-23T06:29:09Z) - Global Distance-distributions Separation for Unsupervised Person Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail in correctly identifying the positive samples and negative samples through the distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-06-01T07:05:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.