Fair-CDA: Continuous and Directional Augmentation for Group Fairness
- URL: http://arxiv.org/abs/2304.00295v1
- Date: Sat, 1 Apr 2023 11:23:00 GMT
- Title: Fair-CDA: Continuous and Directional Augmentation for Group Fairness
- Authors: Rui Sun, Fengwei Zhou, Zhenhua Dong, Chuanlong Xie, Lanqing Hong,
Jiawei Li, Rui Zhang, Zhen Li, Zhenguo Li
- Abstract summary: We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
- Score: 48.84385689186208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose Fair-CDA, a fine-grained data augmentation
strategy for imposing fairness constraints. We use a feature disentanglement
method to extract the features highly related to the sensitive attributes. Then
we show that group fairness can be achieved by regularizing the models on
transition paths of sensitive features between groups. By adjusting the
perturbation strength in the direction of the paths, our proposed augmentation
is controllable and auditable. To alleviate the accuracy degradation caused by
fairness constraints, we further introduce a calibrated model to impute labels
for the augmented data. Our proposed method does not assume any data generative
model and ensures good generalization for both accuracy and fairness.
Experimental results show that Fair-CDA consistently outperforms
state-of-the-art methods on widely-used benchmarks, e.g., Adult, CelebA and
MovieLens. Notably, Fair-CDA obtains an 86.3% relative improvement in
fairness while maintaining accuracy on the Adult dataset. Moreover, we
evaluate Fair-CDA in an online recommendation system to demonstrate the
effectiveness of our method in terms of accuracy and fairness.
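As a rough illustration, the transition-path augmentation described in the abstract can be sketched as shifting sensitive-related features along the direction between group means, with a strength parameter making the perturbation controllable. The function and variable names below are hypothetical; this is a minimal sketch under assumed details, not the paper's implementation.

```python
import numpy as np

def directional_augment(z, group_ids, alpha):
    """Minimal sketch: shift each sample's sensitive-related features z
    along the transition direction between the two groups' means.

    z         -- (n, d) sensitive-related features (e.g. from an assumed
                 disentanglement encoder)
    group_ids -- (n,) binary sensitive-group labels
    alpha     -- perturbation strength; 0 keeps z unchanged, larger values
                 move samples further toward the opposite group
    """
    mu0 = z[group_ids == 0].mean(axis=0)
    mu1 = z[group_ids == 1].mean(axis=0)
    direction = mu1 - mu0                           # transition-path direction
    # group-0 samples move toward group 1, group-1 samples toward group 0
    sign = np.where(group_ids == 0, 1.0, -1.0)[:, None]
    return z + alpha * sign * direction             # controllable, directional shift
```

Regularizing a model to behave consistently on `z` and its counterparts along this path is the kind of constraint the abstract describes; labels for the augmented points would come from a separately calibrated model, as the abstract notes.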
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- FADE: Towards Fairness-aware Augmentation for Domain Generalization via Classifier-Guided Score-based Diffusion Models [9.734351986961613]
Fairness-aware domain generalization (FairDG) has emerged as a critical challenge for deploying trustworthy AI systems.
Traditional fairness methods fail under domain generalization because they do not account for distribution shifts.
We propose Fairness-aware Score-Guided Diffusion Models (FADE) as a novel approach to effectively address the FairDG issue.
arXiv Detail & Related papers (2024-06-13T17:36:05Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
However, FAC models trained with conventional methods can be unfair, exhibiting inconsistent accuracy across data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to sensitive attributes.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
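A minimal sketch of what "batch normalization adaptive to sensitive attributes" could look like, assuming shared batch statistics with per-group affine parameters; the class name and details are illustrative, not the paper's exact design:

```python
import numpy as np

class GroupAdaptiveBN:
    """Sketch of attribute-adaptive batch normalization: all samples share
    the batch statistics, but each sensitive group gets its own learnable
    affine transform (gamma, beta)."""

    def __init__(self, num_features, num_groups, eps=1e-5):
        self.eps = eps
        self.gamma = np.ones((num_groups, num_features))   # per-group scale
        self.beta = np.zeros((num_groups, num_features))   # per-group shift

    def __call__(self, x, group_ids):
        # normalize with shared batch statistics
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        # apply the affine parameters of each sample's sensitive group
        return self.gamma[group_ids] * x_hat + self.beta[group_ids]
```

In a trained network the per-group parameters would be learned jointly with the rest of the model, letting normalization compensate for group-specific feature statistics.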
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
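Under assumed details (a logistic model and a demographic-parity-style gap), the weight-perturbation idea might be sketched as evaluating the fairness loss at an adversarially perturbed weight vector. This is a hypothetical illustration of the general technique, not the paper's RFR algorithm:

```python
import numpy as np

def fairness_gap(w, X, groups):
    """Demographic-parity-style gap: difference in mean positive prediction
    rate between the two sensitive groups (illustrative fairness loss)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))            # logistic predictions
    return abs(p[groups == 0].mean() - p[groups == 1].mean())

def robust_fairness_penalty(w, X, groups, rho=0.05, h=1e-5, eps=1e-12):
    """Evaluate the fairness gap at a worst-case-ish weight perturbation of
    norm rho, found by one finite-difference gradient ascent step."""
    base = fairness_gap(w, X, groups)
    grad = np.zeros_like(w)
    for i in range(len(w)):
        w2 = w.copy()
        w2[i] += h
        grad[i] = (fairness_gap(w2, X, groups) - base) / h
    delta = rho * grad / (np.linalg.norm(grad) + eps)   # ascent direction
    return fairness_gap(w + delta, X, groups)
```

Adding such a penalty to the training loss asks the model to stay fair even under small weight shifts, which is the connection to distribution shift that the abstract outlines.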
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering-based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Promoting Fairness through Hyperparameter Optimization [4.479834103607383]
This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development.
We propose and evaluate fairness-aware variants of three popular hyperparameter optimization (HO) algorithms: Fair Random Search, Fair TPE, and Fairband.
We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature.
arXiv Detail & Related papers (2021-03-23T17:36:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.