Enforcing Delayed-Impact Fairness Guarantees
- URL: http://arxiv.org/abs/2208.11744v1
- Date: Wed, 24 Aug 2022 19:14:56 GMT
- Title: Enforcing Delayed-Impact Fairness Guarantees
- Authors: Aline Weber, Blossom Metevier, Yuriy Brun, Philip S. Thomas, Bruno
Castro da Silva
- Abstract summary: We introduce ELF, the first classification algorithm that provides high-confidence fairness guarantees in terms of long-term, or delayed, impact.
We show experimentally that our algorithm can successfully mitigate long-term unfairness.
- Score: 21.368958668652652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has shown that seemingly fair machine learning models, when
used to inform decisions that have an impact on people's lives or well-being
(e.g., applications involving education, employment, and lending), can
inadvertently increase social inequality in the long term. This is because
prior fairness-aware algorithms only consider static fairness constraints, such
as equal opportunity or demographic parity. However, enforcing constraints of
this type may result in models that have negative long-term impact on
disadvantaged individuals and communities. We introduce ELF (Enforcing
Long-term Fairness), the first classification algorithm that provides
high-confidence fairness guarantees in terms of long-term, or delayed, impact.
We prove that the probability that ELF returns an unfair solution is less than
a user-specified tolerance, and that, under mild assumptions and given sufficient
training data, ELF is able to find and return a fair solution if one exists. We
show experimentally that our algorithm can successfully mitigate long-term
unfairness.
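To make the guarantee concrete: an algorithm of this kind only returns a candidate classifier if a high-confidence upper bound on the delayed-impact unfairness, computed from held-out data, clears the user's tolerance; otherwise it reports that no solution was found. The sketch below illustrates that safety-test pattern with a Hoeffding bound; the function names, the uniform toy data, and the choice of concentration inequality are illustrative assumptions rather than the paper's actual estimators.

```python
import numpy as np

def hoeffding_upper_bound(samples, delta, value_range):
    """One-sided (1 - delta)-confidence upper bound on the mean of samples
    bounded in an interval of width `value_range`."""
    n = len(samples)
    return np.mean(samples) + value_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def safety_test(impact_gap_estimates, delta, value_range, tolerance=0.0):
    """Pass only if the delayed-impact unfairness gap is <= tolerance with confidence 1 - delta."""
    return hoeffding_upper_bound(impact_gap_estimates, delta, value_range) <= tolerance

# Toy usage: per-individual estimates of the difference in expected delayed impact
# between two groups, assumed bounded in [-1, 1] (width 2). In ELF these would come
# from the paper's delayed-impact estimators, not random numbers.
rng = np.random.default_rng(0)
gap_estimates = rng.uniform(-0.2, 0.1, size=5000)
if safety_test(gap_estimates, delta=0.05, value_range=2.0):
    print("candidate model passes the high-confidence delayed-impact check")
else:
    print("No Solution Found: fairness cannot be certified at the requested confidence")
```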
Related papers
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore the potential for achieving fairness without compromising utility when no demographic information is provided with the training set.
We propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses.
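A minimal sketch of the loss-variance idea follows, assuming a simple unconstrained surrogate: the variance of per-sample losses is added to the mean loss as a penalty, whereas the paper restricts the minimization to the set of (near-)optimal empirical-loss solutions; the linear model, data, and hyperparameters are toys.

```python
import torch

def variance_penalized_loss(per_sample_losses, lam=1.0):
    # Mean loss plus a penalty on the variance of per-sample losses: a simplified
    # stand-in for minimizing loss variance among near-optimal solutions.
    return per_sample_losses.mean() + lam * per_sample_losses.var(unbiased=False)

# Toy usage with a linear regression model and squared error
torch.manual_seed(0)
X, y = torch.randn(256, 10), torch.randn(256)
w = torch.zeros(10, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    losses = (X @ w - y) ** 2                    # per-sample losses
    variance_penalized_loss(losses, lam=0.5).backward()
    opt.step()
```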
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
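A small diagnostic sketch of the two quantities EOC constrains, under the simplifying assumption that plain group membership stands in for "groups with similar outcomes"; the data and interval construction are placeholders.

```python
import numpy as np

def coverage_report(y, lower, upper, groups, target=0.9):
    """Empirical coverage of prediction intervals, overall and per group."""
    covered = (y >= lower) & (y <= upper)
    per_group = {int(g): covered[groups == g].mean() for g in np.unique(groups)}
    return {
        "overall": covered.mean(),                                   # should sit near `target`
        "per_group": per_group,                                      # should be close across groups under EOC
        "group_gap": max(per_group.values()) - min(per_group.values()),
        "target": target,
    }

# Toy usage: intervals centered on noisy predictions of y
rng = np.random.default_rng(1)
y = rng.normal(size=2000)
preds = y + rng.normal(scale=0.5, size=2000)
groups = rng.integers(0, 2, size=2000)
print(coverage_report(y, preds - 1.0, preds + 1.0, groups))
```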
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- Adapting Static Fairness to Sequential Decision-Making: Bias Mitigation Strategies towards Equal Long-term Benefit Rate [41.51680686036846]
We introduce a long-term fairness concept named Equal Long-term Benefit Rate (ELBERT) to address biases in sequential decision-making.
ELBERT effectively addresses the temporal discrimination issues found in previous long-term fairness notions.
We show that ELBERT-PO significantly diminishes bias while maintaining high utility.
arXiv Detail & Related papers (2023-09-07T01:10:01Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework called CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fairness Reprogramming [42.65700878967251]
We propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique.
Specifically, FairReprogram considers the case where the model cannot be changed and appends to the input a set of perturbations, called the fairness trigger.
We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models.
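A simplified sketch of the trigger idea, assuming an additive input perturbation and a soft demographic-parity gap as the fairness loss (the paper appends the trigger and uses its own objectives); the frozen linear model and data are placeholders.

```python
import torch

torch.manual_seed(0)

d = 20
w_fixed = torch.randn(d)           # stand-in for a pre-trained classifier's parameters

def frozen_model(x):
    return x @ w_fixed             # never updated: only the trigger is learned

# Learn a single input perturbation ("trigger") that pushes the frozen model's
# positive-prediction rates of the two groups together.
X = torch.randn(512, d)
groups = torch.randint(0, 2, (512,))
trigger = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([trigger], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    probs = torch.sigmoid(frozen_model(X + trigger))
    gap = (probs[groups == 0].mean() - probs[groups == 1].mean()).abs()
    (gap + 0.01 * trigger.norm()).backward()   # small norm penalty keeps the trigger subtle
    opt.step()
```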
arXiv Detail & Related papers (2022-09-21T09:37:00Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Metrizing Fairness [5.323439381187456]
We study supervised learning problems that have significant effects on individuals from two demographic groups.
We seek predictors that are fair with respect to a group fairness criterion such as statistical parity (SP).
In this paper, we identify conditions under which hard SP constraints are guaranteed to improve predictive accuracy.
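For reference, the SP criterion the paper constrains can be measured as the gap in positive-prediction rates between groups; the helper below is an illustrative metric, not the paper's optimization method.

```python
import numpy as np

def statistical_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates across groups (statistical parity gap)."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy usage: group 0 receives positive predictions more often than group 1
rng = np.random.default_rng(2)
groups = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < np.where(groups == 0, 0.6, 0.4)).astype(float)
print(statistical_parity_gap(y_pred, groups))   # roughly 0.2 before any SP constraint
```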
arXiv Detail & Related papers (2022-05-30T12:28:10Z)
- Fairness without Demographics through Adversarially Reweighted Learning [20.803276801890657]
We train an ML model to improve fairness when we do not even know the protected group memberships.
In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues.
Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets.
arXiv Detail & Related papers (2020-06-23T16:06:52Z)
- Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint [31.86959207229775]
In this paper, we propose a framework for learning an individually fair classifier.
We define the probability of individual unfairness (PIU) and solve an optimization problem where PIU's upper bound, which can be estimated from data, is controlled to be close to zero.
Experimental results show that our method can learn an individually fair classifier at a slight cost of accuracy.
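A rough plug-in sketch of the PIU quantity, assuming paired factual and counterfactual features are available for each individual (the paper instead controls an upper bound on PIU derived from a path-specific causal model); the classifier and counterfactual shift are placeholders.

```python
import numpy as np

def estimate_piu(predict, X_factual, X_counterfactual):
    """Fraction of individuals whose predicted label changes between factual and
    counterfactual features: a simple plug-in stand-in for the probability of
    individual unfairness."""
    flipped = predict(X_factual) != predict(X_counterfactual)
    return flipped.mean()

# Toy usage with a threshold classifier and a counterfactual shift on one feature
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
X_cf = X.copy()
X_cf[:, 0] += 0.5                               # stand-in effect of flipping the sensitive attribute
predict = lambda Z: (Z.sum(axis=1) > 0).astype(int)
print(estimate_piu(predict, X, X_cf))           # an estimate to be driven toward zero during training
```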
arXiv Detail & Related papers (2020-02-17T02:46:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.