Accurate Fairness: Improving Individual Fairness without Trading
Accuracy
- URL: http://arxiv.org/abs/2205.08704v2
- Date: Wed, 30 Nov 2022 14:10:41 GMT
- Title: Accurate Fairness: Improving Individual Fairness without Trading
Accuracy
- Authors: Xuran Li, Peng Wu, Jing Su
- Abstract summary: We propose a new fairness criterion, accurate fairness, to align individual fairness with accuracy.
We prove that accurate fairness also implies typical group fairness criteria over a union of similar sub-populations.
To the best of our knowledge, this is the first time that a Siamese approach is adapted for bias mitigation.
- Score: 4.0415037006237595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accuracy and individual fairness are both crucial for trustworthy machine
learning, but the two are often at odds, so that enhancing one may inevitably
sacrifice the other, with side effects of true bias or false fairness. In this
paper we propose a new fairness criterion, accurate fairness, to align
individual fairness with accuracy. Informally, it
requires the treatments of an individual and the individual's similar
counterparts to conform to a uniform target, i.e., the ground truth of the
individual. We prove that accurate fairness also implies typical group fairness
criteria over a union of similar sub-populations. We then present a Siamese
fairness in-processing approach to minimize the accuracy and fairness losses of
a machine learning model under the accurate fairness constraints. To the best
of our knowledge, this is the first time that a Siamese approach is adapted for
bias mitigation. We also propose fairness confusion matrix-based metrics,
fair-precision, fair-recall, and fair-F1 score, to quantify a trade-off between
accuracy and individual fairness. Comparative case studies with popular
fairness datasets show that our Siamese fairness approach can achieve on
average 1.02%-8.78% higher individual fairness (in terms of fairness through
awareness) and 8.38%-13.69% higher accuracy, as well as 10.09%-20.57% higher
true fair rate, and 5.43%-10.01% higher fair-F1 score, than the
state-of-the-art bias mitigation techniques. This demonstrates that our Siamese
fairness approach can indeed improve individual fairness without trading
accuracy. Finally, the accurate fairness criterion and Siamese fairness
approach are applied to mitigate possible service discrimination on a real
Ctrip dataset, fairly serving on average 112.33% more customers (81.29% more
customers in an accurately fair way) than baseline models.
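As a reading aid, the sketch below illustrates how the fairness-confusion-matrix view described in the abstract could be computed. The abstract does not spell out the exact cell or metric definitions, so the partition into true fair, true biased, false fair, and false biased samples and the fair-precision/fair-recall/fair-F1 formulas used here are illustrative assumptions rather than the authors' definitions; the counterpart_preds interface (the model's predictions for each individual's similar counterparts, i.e., the same individual with sensitive attributes varied) is likewise hypothetical.

```python
# Minimal sketch (not the authors' implementation): a prediction is treated as
# "fair" if it agrees with the model's predictions for all of the individual's
# similar counterparts (fairness through awareness), and "true" if it matches
# the ground truth; accurate fairness asks for both at once.
import numpy as np

def fairness_confusion_counts(y_true, y_pred, counterpart_preds):
    """Count true fair / true biased / false fair / false biased samples.

    y_true, y_pred: arrays of shape (n,).
    counterpart_preds: counterpart_preds[i] is an array of predictions for the
        similar counterparts of individual i (hypothetical interface).
    """
    accurate = (y_pred == y_true)
    fair = np.array([np.all(cp == y_pred[i]) for i, cp in enumerate(counterpart_preds)])
    tf = int(np.sum(accurate & fair))    # true fair: accurate and consistently treated
    tb = int(np.sum(accurate & ~fair))   # true biased: accurate but inconsistently treated
    ff = int(np.sum(~accurate & fair))   # false fair: consistent but inaccurate
    fb = int(np.sum(~accurate & ~fair))  # false biased: inaccurate and inconsistent
    return tf, tb, ff, fb

def fair_metrics(tf, tb, ff, fb):
    # Assumed analogues of precision/recall on the fairness confusion matrix:
    # fair-precision = accurate fraction of fair treatments,
    # fair-recall = fair fraction of accurate treatments.
    true_fair_rate = tf / max(tf + tb + ff + fb, 1)
    fair_precision = tf / max(tf + ff, 1)
    fair_recall = tf / max(tf + tb, 1)
    fair_f1 = 2 * fair_precision * fair_recall / max(fair_precision + fair_recall, 1e-12)
    return true_fair_rate, fair_precision, fair_recall, fair_f1
```

Under this reading, improving accurate fairness amounts to moving samples into the true fair cell, which is consistent with the constrained loss the Siamese fairness approach minimizes.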
Related papers
- BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers [11.406478357477292]
We introduce BadFair, a novel backdoored fairness attack methodology.
BadFair stealthily crafts a model that operates with accuracy and fairness under regular conditions but, when activated by certain triggers, discriminates and produces incorrect results for specific groups.
Our findings reveal that BadFair achieves an average attack success rate of more than 85% against targeted groups while incurring only minimal accuracy loss.
arXiv Detail & Related papers (2024-10-23T01:14:54Z) - Learning Fairer Representations with FairVIC [0.0]
Mitigating bias in automated decision-making systems is a critical challenge due to nuanced definitions of fairness and dataset-specific biases.
We introduce FairVIC, an innovative approach that enhances fairness in neural networks by integrating variance, invariance, and covariance terms into the loss function during training.
We evaluate FairVIC against comparable bias mitigation techniques on benchmark datasets, considering both group and individual fairness, and conduct an ablation study on the accuracy-fairness trade-off.
arXiv Detail & Related papers (2024-04-28T10:10:21Z) - Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z) - Navigating Fairness Measures and Trade-Offs [0.0]
I show that by using Rawls' notion of justice as fairness, we can create a basis for navigating fairness measures and the accuracy trade-off.
This also helps to close part of the gap between philosophical accounts of distributive justice and the fairness literature.
arXiv Detail & Related papers (2023-07-17T13:45:47Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes.
We propose Balance Adversarial Training (BAT) to address the robust fairness problem.
arXiv Detail & Related papers (2022-09-15T14:44:48Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z) - Parity-based Cumulative Fairness-aware Boosting [7.824964622317634]
Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race.
We propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution at each round.
Our experiments show that our approach can achieve parity in terms of statistical parity, equal opportunity, and disparate mistreatment.
arXiv Detail & Related papers (2022-01-04T14:16:36Z) - Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)