The Possibility of Fairness: Revisiting the Impossibility Theorem in
Practice
- URL: http://arxiv.org/abs/2302.06347v1
- Date: Mon, 13 Feb 2023 13:29:24 GMT
- Title: The Possibility of Fairness: Revisiting the Impossibility Theorem in
Practice
- Authors: Andrew Bell, Lucius Bynum, Nazarii Drushchak, Tetiana Herasymova,
Lucas Rosenblatt, Julia Stoyanovich
- Abstract summary: We show that it is possible to identify a large set of models that satisfy seemingly incompatible fairness constraints.
We offer tools and guidance for practitioners to understand when -- and to what degree -- fairness along multiple criteria can be achieved.
- Score: 5.175941513195566
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The "impossibility theorem", which is considered foundational in
the algorithmic fairness literature, asserts that there must be trade-offs
between common notions of fairness and performance when fitting statistical
models, except in two special cases: when the prevalence of the outcome being
predicted is equal across groups, or when a perfectly accurate predictor is
used. However, theory does not always translate to practice. In this work, we
challenge the implications of the impossibility theorem in practical settings.
First, we show analytically that, by slightly relaxing the impossibility
theorem (to accommodate a practitioner's perspective of fairness), it
becomes possible to identify a large set of models that satisfy seemingly
incompatible fairness constraints. Second, we demonstrate the existence of
these models through extensive experiments on five real-world datasets. We
conclude by offering tools and guidance for practitioners to understand when --
and to what degree -- fairness along multiple criteria can be achieved. For
example, if one allows only a small margin-of-error between metrics, there
exists a large set of models simultaneously satisfying False Negative
Rate Parity, False Positive Rate Parity, and Positive Predictive
Value Parity, even when there is a moderate prevalence difference between
groups. This work has an important implication for the community: achieving
fairness along multiple metrics for multiple groups (and their intersections)
is much more possible than was previously believed.
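The trade-off the theorem describes follows from a standard identity relating the three metrics to the outcome prevalence p in a group (background from the fairness literature, not a formula stated in this abstract): FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), so equal FNR and FPR across groups force unequal PPV whenever prevalence differs. The margin-of-error relaxation can be made concrete with a small sketch. The code below is our own illustration, not code from the paper; the function names, the epsilon threshold, and the toy data are assumptions for illustration only. It computes group-conditional FNR, FPR, and PPV from a classifier's predictions and reports whether every between-group gap falls within a chosen margin-of-error.

```python
# Illustrative sketch (not the paper's released code): check whether a binary
# classifier satisfies approximate FNR, FPR, and PPV parity across groups,
# where "approximate" means every between-group gap is at most `epsilon`.
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Return {group: (fnr, fpr, ppv)} computed from confusion-matrix counts."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for y, yhat, g in zip(y_true, y_pred, groups):
        if y == 1 and yhat == 1:
            counts[g]["tp"] += 1
        elif y == 1 and yhat == 0:
            counts[g]["fn"] += 1
        elif y == 0 and yhat == 1:
            counts[g]["fp"] += 1
        else:
            counts[g]["tn"] += 1
    rates = {}
    for g, c in counts.items():
        fnr = c["fn"] / (c["fn"] + c["tp"]) if (c["fn"] + c["tp"]) else 0.0
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
        ppv = c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else 0.0
        rates[g] = (fnr, fpr, ppv)
    return rates

def within_margin(rates, epsilon=0.05):
    """True if the max between-group gap of each metric is at most epsilon."""
    per_metric = list(zip(*rates.values()))  # [(fnr_g1, fnr_g2, ...), ...]
    return all(max(vals) - min(vals) <= epsilon for vals in per_metric)

# Toy usage: two groups with a moderate prevalence difference.
y_true = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(group_rates(y_true, y_pred, groups))
print(within_margin(group_rates(y_true, y_pred, groups), epsilon=0.1))
```

In the paper's terms, a model that passes this check for a small epsilon satisfies all three parity notions approximately, even when exact parity is ruled out by the impossibility theorem.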
Related papers
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal
Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive, but rather complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Pushing the limits of fairness impossibility: Who's the fairest of them
all? [6.396013144017572]
We present a framework that pushes the limits of the impossibility theorem in order to satisfy all three metrics to the best extent possible.
We show experiments demonstrating that our post-processor can improve fairness across the different definitions simultaneously with minimal model performance reduction.
arXiv Detail & Related papers (2022-08-24T22:04:51Z) - Bounding and Approximating Intersectional Fairness through Marginal
Fairness [7.954748673441148]
Discrimination in machine learning often arises along multiple dimensions.
It is desirable to ensure intersectional fairness, i.e., that no subgroup is discriminated against.
arXiv Detail & Related papers (2022-06-12T19:53:34Z) - Beyond Impossibility: Balancing Sufficiency, Separation and Accuracy [27.744055920557024]
There is tension between satisfying sufficiency and separation.
We propose an objective that aims to balance sufficiency and separation measures.
We show promising results, where better trade-offs are achieved compared to existing alternatives.
arXiv Detail & Related papers (2022-05-24T19:14:21Z) - Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives that directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance on two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Fair Representation: Guaranteeing Approximate Multiple Group Fairness
for Unknown Tasks [17.231251035416648]
We study whether fair representation can be used to guarantee fairness for unknown tasks and for multiple fairness notions simultaneously.
We prove that, although fair representation might not guarantee fairness for all prediction tasks, it does guarantee fairness for an important subset of tasks.
arXiv Detail & Related papers (2021-09-01T17:29:11Z) - FADE: FAir Double Ensemble Learning for Observable and Counterfactual
Outcomes [0.0]
Methods for building fair predictors often involve tradeoffs between fairness and accuracy and between different fairness criteria.
We develop a flexible framework for fair ensemble learning that allows users to efficiently explore the fairness-accuracy space.
We show that, surprisingly, multiple unfairness measures can sometimes be minimized simultaneously with little impact on accuracy.
arXiv Detail & Related papers (2021-09-01T03:56:43Z)