Maximal Fairness
- URL: http://arxiv.org/abs/2304.06057v1
- Date: Wed, 12 Apr 2023 12:28:44 GMT
- Title: Maximal Fairness
- Authors: MaryBeth Defrance and Tijl De Bie
- Abstract summary: The so-called "Impossibility Theorem" states that satisfying a certain combination of fairness measures is impossible.
This work identifies maximal sets of commonly used fairness measures that can be simultaneously satisfied.
In total, 12 maximal sets of these fairness measures are possible, of which seven are combinations of two measures and five are combinations of three measures.
- Score: 13.542616958246725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness in AI has garnered quite some attention in research, and
increasingly also in society. The so-called "Impossibility Theorem" has been
one of the more striking research results with both theoretical and practical
consequences, as it states that satisfying a certain combination of fairness
measures is impossible. To date, this negative result has not yet been
complemented with a positive one: a characterization of which combinations of
fairness notions are possible. This work aims to fill this gap by identifying
maximal sets of commonly used fairness measures that can be simultaneously
satisfied. The fairness measures used are demographic parity, equal
opportunity, false positive parity, predictive parity, predictive equality,
overall accuracy equality and treatment equality. We conclude that in total 12
maximal sets of these fairness measures are possible, among which seven
combinations of two measures, and five combinations of three measures. Our work
raises interesting questions regarding the practical relevance of each of these 12
maximal fairness notions in various scenarios.
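The abstract's fairness measures are all functions of group-conditional confusion-matrix rates. As a minimal sketch, two of them (demographic parity and equal opportunity) can be computed on toy data as below; the function names and the toy arrays are illustrative, not from the paper.

```python
# Hedged sketch of two group-fairness measures from the abstract.
# Convention assumed here: a measure is "satisfied" when the gap is zero.

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between the two groups (0 and 1)."""
    def rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates (recall on the positive class) between groups."""
    def tpr(g):
        preds = [p for t, p, a in zip(y_true, y_pred, group) if a == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy binary labels, predictions, and a binary sensitive attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))          # prints 0.25
print(equal_opportunity_diff(y_true, y_pred, group))   # prints 0.333...
```

The other measures in the paper (false positive parity, predictive parity, predictive equality, overall accuracy equality, treatment equality) follow the same pattern with different confusion-matrix rates; the Impossibility Theorem concerns which of these gaps can be zero simultaneously.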
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice [5.175941513195566]
We show that it is possible to identify a large set of models that satisfy seemingly incompatible fairness constraints.
We offer tools and guidance for practitioners to understand when -- and to what degree -- fairness along multiple criteria can be achieved.
arXiv Detail & Related papers (2023-02-13T13:29:24Z)
- Increasing Fairness via Combination with Learning Guarantees [8.314000998551865]
We propose a fairness quality measure named discriminative risk to reflect both individual and group fairness aspects.
We also propose first- and second-order oracle bounds to show that fairness can be boosted via ensemble combination with theoretical learning guarantees.
arXiv Detail & Related papers (2023-01-25T20:31:06Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Pushing the limits of fairness impossibility: Who's the fairest of them all? [6.396013144017572]
We present a framework that pushes the limits of the impossibility theorem in order to satisfy all three metrics to the best extent possible.
We show experiments demonstrating that our post-processor can improve fairness across the different definitions simultaneously with minimal model performance reduction.
arXiv Detail & Related papers (2022-08-24T22:04:51Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions [48.98659895355356]
This paper investigates the cumulative effect of multiple fairness enhancing interventions at different stages of the machine learning (ML) pipeline.
Applying multiple interventions results in better fairness and lower utility than individual interventions on aggregate.
On the downside, fairness-enhancing interventions can negatively impact different population groups, especially the privileged group.
arXiv Detail & Related papers (2022-02-08T09:20:58Z)
- Gradual (In)Compatibility of Fairness Criteria [0.0]
Impossibility results show that important fairness measures cannot be satisfied at the same time under reasonable assumptions.
This paper explores whether we can satisfy and/or improve these fairness measures simultaneously to a certain degree.
arXiv Detail & Related papers (2021-09-09T16:37:30Z)
- Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research [2.6397379133308214]
We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions.
While the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness.
arXiv Detail & Related papers (2021-02-01T22:02:14Z)
- Discrimination of POVMs with rank-one effects [62.997667081978825]
This work provides an insight into the problem of discrimination of positive operator valued measures with rank-one effects.
We compare two possible discrimination schemes: the parallel and adaptive ones.
We provide an explicit algorithm which allows us to find this adaptive scheme.
arXiv Detail & Related papers (2020-02-13T11:34:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.