Inherent Trade-offs in the Fair Allocation of Treatments
- URL: http://arxiv.org/abs/2010.16409v1
- Date: Fri, 30 Oct 2020 17:55:00 GMT
- Title: Inherent Trade-offs in the Fair Allocation of Treatments
- Authors: Yuzi He, Keith Burghardt, Siyi Guo, Kristina Lerman
- Abstract summary: Explicit and implicit bias clouds human judgement, leading to discriminatory treatment of minority groups.
We propose a causal framework that learns optimal intervention policies from data subject to fairness constraints.
- Score: 2.6143568807090696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explicit and implicit bias clouds human judgement, leading to discriminatory
treatment of minority groups. A fundamental goal of algorithmic fairness is to
avoid the pitfalls in human judgement by learning policies that improve the
overall outcomes while providing fair treatment to protected classes. In this
paper, we propose a causal framework that learns optimal intervention policies
from data subject to fairness constraints. We define two measures of treatment
bias and infer best treatment assignment that minimizes the bias while
optimizing overall outcome. We demonstrate that there is a dilemma of balancing
fairness and overall benefit; however, allowing preferential treatment to
protected classes in certain circumstances (affirmative action) can
dramatically improve the overall benefit while also preserving fairness. We
apply our framework to data containing student outcomes on standardized tests
and show how it can be used to design real-world policies that fairly improve
student test scores. Our framework provides a principled way to learn fair
treatment policies in real-world settings.
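The trade-off described in the abstract can be illustrated with a minimal sketch (not the paper's code; the group sizes, effect estimates, and budget below are synthetic assumptions): allocate a fixed treatment budget to maximize estimated benefit, with and without a treatment-rate parity constraint between a protected and a non-protected group.
```python
import numpy as np

rng = np.random.default_rng(0)

# Estimated individual treatment effects (e.g., expected test-score gains);
# group B is the protected class and has a lower average estimated effect.
n_a, n_b = 700, 300
tau_a = rng.normal(1.0, 1.0, n_a)
tau_b = rng.normal(0.6, 1.0, n_b)
budget = 400  # total number of treatments that can be assigned

def top_k_benefit(tau, k):
    """Total estimated benefit of treating the k individuals with the largest effects."""
    k = max(0, min(k, len(tau)))
    return np.sort(tau)[::-1][:k].sum()

# 1) Outcome-only policy: ignore groups and treat the global top-`budget`.
benefit_unconstrained = top_k_benefit(np.concatenate([tau_a, tau_b]), budget)

# 2) Parity-constrained policy: give each group a share of the budget
#    proportional to its size, i.e. (approximately) equal treatment rates.
k_a = round(budget * n_a / (n_a + n_b))
k_b = budget - k_a
benefit_parity = top_k_benefit(tau_a, k_a) + top_k_benefit(tau_b, k_b)

print(f"unconstrained benefit:      {benefit_unconstrained:.1f}")
print(f"parity-constrained benefit: {benefit_parity:.1f}")
# The gap between the two totals is the price paid for this fairness
# constraint; shifting extra budget toward group B corresponds to the
# preferential treatment (affirmative action) discussed in the abstract.
```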
Related papers
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore the potential for achieving fairness without compromising utility when no prior demographic information is provided with the training set.
We propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses.
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
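A loose sketch of the VFair idea summarized above (not the authors' implementation): penalize the variance of per-example training losses so that no subpopulation is left with a much higher loss. VFair restricts this minimization to the set of empirical-loss minimizers; the fixed penalty weight `lam` below is only a simplified surrogate, and the data are synthetic.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)                              # synthetic features
y = (X[:, 0] + 0.5 * torch.randn(512) > 0).float()    # synthetic binary labels

model = nn.Linear(10, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss(reduction="none")          # keep per-example losses
lam = 1.0                                             # penalty weight (assumed)

for step in range(200):
    losses = bce(model(X).squeeze(-1), y)             # one loss per example
    # Mean loss plus a penalty on the spread of individual losses.
    objective = losses.mean() + lam * losses.var()
    opt.zero_grad()
    objective.backward()
    opt.step()

print(f"mean loss {losses.mean().item():.3f}, loss variance {losses.var().item():.3f}")
```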
- Fair Off-Policy Learning from Observational Data [30.77874108094485]
We propose a novel framework for fair off-policy learning.
We first formalize different fairness notions for off-policy learning.
We then propose a neural network-based framework to learn optimal policies under different fairness notions.
arXiv Detail & Related papers (2023-03-15T10:47:48Z)
- Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparities in accuracy and robustness between classes.
We propose Balance Adversarial Training (BAT) to address this robust fairness problem.
arXiv Detail & Related papers (2022-09-15T14:44:48Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
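A hedged sketch of one way to optimise directly for equal opportunity (not the paper's exact objectives): add a differentiable penalty on the gap, between groups, of the model's average score on positive examples, which approximates the true-positive-rate gap. The data, penalty weight, and linear model below are assumptions.
```python
import torch
import torch.nn as nn

def equal_opportunity_gap(scores, y, group):
    """Absolute gap, between groups, of the mean sigmoid score on positive examples."""
    pos = y == 1
    p0 = torch.sigmoid(scores[pos & (group == 0)]).mean()
    p1 = torch.sigmoid(scores[pos & (group == 1)]).mean()
    return (p0 - p1).abs()

torch.manual_seed(0)
X = torch.randn(400, 8)                                  # synthetic features
group = (torch.rand(400) < 0.3).long()                   # synthetic protected attribute
y = (X[:, 0] + 0.8 * group + 0.3 * torch.randn(400) > 0.5).long()

model = nn.Linear(8, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 0.5                                                # trade-off weight (assumed)

for _ in range(300):
    scores = model(X).squeeze(-1)
    loss = bce(scores, y.float()) + lam * equal_opportunity_gap(scores, y, group)
    opt.zero_grad()
    loss.backward()
    opt.step()
```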
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
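An architecture-only sketch of the augmented-discriminator idea (not the authors' implementation): the discriminator receives the target class alongside the hidden representation, so it can model class-conditional leakage of the protected attribute. The dimensions and the small MLP are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentedDiscriminator(nn.Module):
    def __init__(self, hidden_dim, n_classes, n_protected):
        super().__init__()
        # Input = hidden representation concatenated with a one-hot target class.
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + n_classes, 64),
            nn.ReLU(),
            nn.Linear(64, n_protected),
        )
        self.n_classes = n_classes

    def forward(self, h, y):
        y_onehot = F.one_hot(y, num_classes=self.n_classes).float()
        return self.net(torch.cat([h, y_onehot], dim=-1))

# Usage: the main model is trained to *fool* this discriminator (e.g. via a
# gradient-reversal layer or alternating updates), while the discriminator
# is trained to recover the protected attribute from (h, y).
disc = AugmentedDiscriminator(hidden_dim=128, n_classes=2, n_protected=2)
h = torch.randn(16, 128)                 # dummy hidden representations
y = torch.randint(0, 2, (16,))           # dummy target classes
logits = disc(h, y)                      # shape (16, 2)
```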
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Fairness without Demographics through Adversarially Reweighted Learning [20.803276801890657]
We train an ML model to improve fairness when we do not even know the protected group memberships.
In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues.
Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets.
arXiv Detail & Related papers (2020-06-23T16:06:52Z)
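A rough sketch of the adversarially reweighted learning (ARL) idea (not the reference implementation): an adversary that sees only the non-protected features and the task label learns to upweight high-loss examples, and the learner minimizes the reweighted loss. The data, learning rates, and linear models are assumptions.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)                               # synthetic features
y = (X[:, 0] + 0.5 * torch.randn(512) > 0).float()     # synthetic labels

learner = nn.Linear(10, 1)
adversary = nn.Linear(11, 1)                           # input = features + label
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss(reduction="none")

for step in range(300):
    losses = bce(learner(X).squeeze(-1), y)
    adv_in = torch.cat([X, y.unsqueeze(-1)], dim=-1)
    # Normalized example weights produced by the adversary.
    weights = len(X) * torch.softmax(adversary(adv_in).squeeze(-1), dim=0)

    # Learner step: minimize the weighted loss (weights treated as constants).
    opt_l.zero_grad()
    (weights.detach() * losses).mean().backward()
    opt_l.step()

    # Adversary step: maximize the same weighted loss.
    opt_a.zero_grad()
    (-(weights * losses.detach()).mean()).backward()
    opt_a.step()
```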
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been growing concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Fair Policy Targeting [0.6091702876917281]
One of the major concerns of targeting interventions on individuals in social welfare programs is discrimination.
This paper addresses the question of the design of fair and efficient treatment allocation rules.
arXiv Detail & Related papers (2020-05-25T20:45:25Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
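A simplified illustration of why robustness to noisy group labels matters (not the paper's algorithms): evaluate a demographic-parity gap under many group labelings consistent with an assumed flip rate and report the worst case rather than the single noisy estimate. The decisions, labels, and flip rate below are synthetic assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

y_pred = rng.integers(0, 2, 1000)          # model decisions (dummy)
group_noisy = rng.integers(0, 2, 1000)     # observed, possibly corrupted group labels
flip_rate = 0.2                            # assumed label-noise level

def parity_gap(y_pred, group):
    """Absolute gap in positive-decision rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

naive_gap = parity_gap(y_pred, group_noisy)

# Worst case over sampled labelings consistent with the noise model.
worst_gap = 0.0
for _ in range(200):
    flips = rng.random(len(group_noisy)) < flip_rate
    group_candidate = np.where(flips, 1 - group_noisy, group_noisy)
    worst_gap = max(worst_gap, parity_gap(y_pred, group_candidate))

print(f"gap on noisy labels: {naive_gap:.3f}")
print(f"worst-case gap under {flip_rate:.0%} noise: {worst_gap:.3f}")
# A robust training approach would enforce the fairness constraint against
# this worst case (or a tractable relaxation of it) rather than the naive estimate.
```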
- The Measure and Mismeasure of Fairness [6.6697126372463345]
We argue that the equitable design of algorithms requires grappling with their context-specific consequences.
We offer strategies to ensure algorithms are better aligned with policy goals.
arXiv Detail & Related papers (2018-07-31T18:38:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.