Unfairness Despite Awareness: Group-Fair Classification with Strategic Agents
- URL: http://arxiv.org/abs/2112.02746v1
- Date: Mon, 6 Dec 2021 02:42:43 GMT
- Title: Unfairness Despite Awareness: Group-Fair Classification with Strategic Agents
- Authors: Andrew Estornell, Sanmay Das, Yang Liu, Yevgeniy Vorobeychik
- Abstract summary: We show that strategic agents may possess both the ability and the incentive to manipulate an observed feature vector in order to attain a more favorable outcome.
We further demonstrate that both the increased selectiveness of the fair classifier, and consequently the loss of fairness, arise when performing fair learning on domains in which the advantaged group is overrepresented.
- Score: 37.31138342300617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of algorithmic decision making systems in domains which impact the
financial, social, and political well-being of people has created a demand for
these decision making systems to be "fair" under some accepted notion of
equity. This demand has in turn inspired a large body of work focused on the
development of fair learning algorithms which are then used in lieu of their
conventional counterparts. Most analysis of such fair algorithms proceeds from
the assumption that the people affected by the algorithmic decisions are
represented as immutable feature vectors. However, strategic agents may possess
both the ability and the incentive to manipulate this observed feature vector
in order to attain a more favorable outcome. We explore the impact that
strategic agent behavior could have on fair classifiers and derive conditions
under which this behavior leads to fair classifiers becoming less fair than
their conventional counterparts under the same measure of fairness that the
fair classifier takes into account. These conditions are related to the way
in which the fair classifier remedies unfairness on the original unmanipulated
data: fair classifiers which remedy unfairness by becoming more selective than
their conventional counterparts are the ones that become less fair than their
counterparts when agents are strategic. We further demonstrate that both the
increased selectiveness of the fair classifier, and consequently the loss of
fairness, arise when performing fair learning on domains in which the
advantaged group is overrepresented in the region near (and on the beneficial
side of) the decision boundary of conventional classifiers. Finally, we observe
experimentally, using several datasets and learning methods, that this fairness
reversal is common, and that our theoretical characterization of the fairness
reversal conditions indeed holds in most such cases.
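The reversal mechanism can be reproduced in one dimension. Below is a minimal sketch (the score distributions, thresholds `t_conv` and `t_fair`, and the manipulation `budget` are illustrative assumptions, not values from the paper) in which the more selective "fair" threshold shrinks the demographic-parity gap on truthful data but has the larger gap once agents manipulate:

```python
from scipy.stats import norm

# Assumed setup: 1-D scores, equal-variance normals, with the advantaged
# group A concentrated just above the conventional boundary and the
# disadvantaged group B farther below it.
mu_a, mu_b, sigma = 0.55, 0.35, 0.10
t_conv = 0.50   # conventional, accuracy-driven threshold
t_fair = 0.62   # fair classifier that remedies disparity by being MORE selective
budget = 0.12   # largest upward manipulation an agent can afford

def dp_gap(t, strategic=False):
    """Demographic-parity gap |P(accept|A) - P(accept|B)| at threshold t.
    A strategic agent with true score x is accepted iff x + budget >= t,
    so manipulation simply lowers the effective threshold by `budget`."""
    t_eff = t - budget if strategic else t
    return abs(norm.sf(t_eff, mu_a, sigma) - norm.sf(t_eff, mu_b, sigma))

for strategic in (False, True):
    print(f"strategic={strategic}: conv gap = {dp_gap(t_conv, strategic):.3f}, "
          f"fair gap = {dp_gap(t_fair, strategic):.3f}")
# strategic=False: conv gap = 0.625, fair gap = 0.239  (fair classifier is fairer)
# strategic=True:  conv gap = 0.573, fair gap = 0.625  (fairness reversal)
```

The reversal appears exactly under the condition stated above: group A is overrepresented near, and on the beneficial side of, the conventional boundary, so when the fair classifier raises the bar, A's members can manipulate their way back across while B's cannot.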
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity [15.78130132380848]
Algorithmic recourse has gained attention as a means of giving persons agency in their interactions with AI systems.
Recent work has shown that recourse itself may be unfair due to differences in the initial circumstances of individuals.
Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change.
arXiv Detail & Related papers (2024-01-29T11:55:45Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
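As a rough illustration of the flavor of such a linear program (a toy instance with hypothetical utilities and a simple expected-utility parity constraint; the paper's axiomatized formulation is more involved):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: 4 candidate allocations, platform utility u[k],
# and group utilities f_group*[k] under allocation k.
u        = np.array([1.0, 0.8, 0.6, 0.5])
f_group1 = np.array([1.0, 0.2, 0.9, 0.5])
f_group2 = np.array([0.1, 0.9, 0.4, 0.5])

# Decision variable: a probability distribution p over the allocations.
# One simple fairness constraint: groups' expected utilities differ by <= eps.
eps = 0.05
A_ub = np.vstack([f_group1 - f_group2,   # E[f1] - E[f2] <= eps
                  f_group2 - f_group1])  # E[f2] - E[f1] <= eps
b_ub = np.array([eps, eps])
A_eq = np.ones((1, 4))                   # p must sum to 1
b_eq = np.array([1.0])

res = linprog(-u, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 4)       # linprog minimizes, so negate u
print("fair distribution over allocations:", res.x.round(3))
print("expected platform utility:", round(-res.fun, 3))
```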
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- The Fairness of Credit Scoring Models [0.0]
In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers, but they may also discriminate against groups defined by a protected attribute; this discrimination can be unintentional and originate from the training dataset or from the model itself.
We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness.
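A minimal sketch of what a formal fairness test of a scoring model can look like, here a standard two-proportion z-test on approval rates (the paper's actual test statistic and its variable-attribution procedure may differ; the counts are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def approval_rate_test(k_prot, n_prot, k_rest, n_rest):
    """Two-proportion z-test of H0: equal approval rates for the protected
    group (k_prot approvals out of n_prot) and the rest of the applicants."""
    p1, p2 = k_prot / n_prot, k_rest / n_rest
    p_pool = (k_prot + k_rest) / (n_prot + n_rest)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_prot + 1 / n_rest))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))        # two-sided p-value

# Hypothetical counts: 240/800 protected applicants approved vs 1400/3200 others.
z, p = approval_rate_test(240, 800, 1400, 3200)
print(f"z = {z:.2f}, p = {p:.2e}")       # a significant gap flags potential unfairness
```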
arXiv Detail & Related papers (2022-05-20T14:20:40Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
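A minimal sketch of such an augmented discriminator, assuming a PyTorch encoder that produces hidden states `h` (the layer sizes and the surrounding adversarial training loop are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentedDiscriminator(nn.Module):
    """Adversary that predicts the protected attribute from a hidden
    representation augmented with the one-hot target class, letting it
    learn class-specific features (a sketch of the abstract's idea)."""
    def __init__(self, hidden_dim: int, n_classes: int, n_protected: int):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + n_classes, 64),
            nn.ReLU(),
            nn.Linear(64, n_protected),
        )

    def forward(self, h: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        y_onehot = F.one_hot(y, self.n_classes).float()
        return self.net(torch.cat([h, y_onehot], dim=-1))

# The main model is trained to minimize task loss minus the discriminator's
# loss, so its representations carry little protected-attribute signal
# within each target class (equal opportunity rather than demographic parity).
disc = AugmentedDiscriminator(hidden_dim=32, n_classes=2, n_protected=2)
h, y = torch.randn(8, 32), torch.randint(0, 2, (8,))
print(disc(h, y).shape)   # torch.Size([8, 2])
```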
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Optimal Transport of Binary Classifiers to Fairness [16.588468396705366]
We show, and confirm experimentally, that Optimal Transport to Fairness (OTF) achieves an effective trade-off between predictive power and fairness.
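In one dimension, optimal transport between score distributions reduces to quantile alignment, so a single interpolation parameter traces out a predictive-power/fairness trade-off. A sketch under these assumptions (not the paper's exact post-processing step):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
s_a = rng.beta(5, 2, 4000)   # illustrative classifier scores for group A
s_b = rng.beta(2, 5, 4000)   # illustrative classifier scores for group B

def partial_repair(scores, other, lam):
    """Move each score a fraction lam along the 1-D optimal transport map
    (quantile alignment) toward the other group's score distribution.
    lam=0 leaves predictions untouched; lam=1 equalizes the distributions."""
    q = np.searchsorted(np.sort(scores), scores) / len(scores)
    return (1 - lam) * scores + lam * np.quantile(other, q)

for lam in (0.0, 0.5, 1.0):
    d = wasserstein_distance(partial_repair(s_a, s_b, lam), s_b)
    print(f"lam={lam}: remaining W1 distance between groups = {d:.3f}")
```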
arXiv Detail & Related papers (2022-02-08T12:16:24Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
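One standard quantification method suited to this setting is Adjusted Classify & Count, which corrects the raw predicted prevalence of the unobserved sensitive attribute using the attribute classifier's error rates estimated on auxiliary labeled data. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def adjusted_classify_and_count(preds, tpr, fpr):
    """Correct the raw predicted prevalence of the sensitive attribute using
    the attribute classifier's true/false positive rates (from held-out data)."""
    cc = preds.mean()                             # raw "classify and count"
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))

# Hypothetical numbers: an auxiliary model flags 34% of deployment records as
# protected-group members; its held-out tpr is 0.80 and fpr is 0.10.
preds = np.random.default_rng(2).random(10_000) < 0.34
print("corrected group prevalence:",
      round(adjusted_classify_and_count(preds, 0.80, 0.10), 3))
# Group fairness metrics (e.g., demographic parity) can then be estimated
# from such corrected prevalences without observing the sensitive attribute.
```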
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Everything is Relative: Understanding Fairness with Optimal Transport [1.160208922584163]
We present an optimal transport-based approach to fairness that offers an interpretable and quantifiable exploration of bias and its structure.
Our framework is able to recover well known examples of algorithmic discrimination, detect unfairness when other metrics fail, and explore recourse opportunities.
arXiv Detail & Related papers (2021-02-20T13:57:53Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
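The metric being targeted is easy to state concretely: a demographic-parity gap computed within each stratum of the fairness variables and then averaged. The sketch below only evaluates that metric on synthetic data (DCFR itself optimizes a derivable adversarial surrogate, which is not reproduced here):

```python
import numpy as np

def conditional_dp_gap(y_hat, group, z):
    """z-weighted average of per-stratum acceptance-rate gaps between groups:
    sum_v P(z=v) * |P(y_hat=1 | group=0, z=v) - P(y_hat=1 | group=1, z=v)|."""
    gaps, weights = [], []
    for v in np.unique(z):
        m = (z == v)
        r0 = y_hat[m & (group == 0)].mean()
        r1 = y_hat[m & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
        weights.append(m.mean())
    return np.average(gaps, weights=weights)

rng = np.random.default_rng(3)
z = rng.integers(0, 3, 6000)                # fairness (conditioning) variable
group = rng.integers(0, 2, 6000)            # sensitive attribute
y_hat = (rng.random(6000) < 0.3 + 0.1 * z)  # decisions depend on z, not group
print("conditional DP gap:", round(float(conditional_dp_gap(y_hat, group, z)), 3))
# Near 0: decisions are fair conditional on z, even though the unconditional
# DP gap could be nonzero whenever z correlates with the group.
```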
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.