Pushing the limits of fairness impossibility: Who's the fairest of them all?
- URL: http://arxiv.org/abs/2208.12606v1
- Date: Wed, 24 Aug 2022 22:04:51 GMT
- Title: Pushing the limits of fairness impossibility: Who's the fairest of them all?
- Authors: Brian Hsu, Rahul Mazumder, Preetam Nandy, Kinjal Basu
- Abstract summary: We present a framework that pushes the limits of the impossibility theorem in order to satisfy all three metrics to the best extent possible.
We show experiments demonstrating that our post-processor can improve fairness across the different definitions simultaneously with minimal model performance reduction.
- Score: 6.396013144017572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The impossibility theorem of fairness is a foundational result in the
algorithmic fairness literature. It states that outside of special cases, one
cannot exactly and simultaneously satisfy all three common and intuitive
definitions of fairness - demographic parity, equalized odds, and predictive
rate parity. This result has driven most works to focus on solutions for one or
two of the metrics. Rather than follow suit, in this paper we present a
framework that pushes the limits of the impossibility theorem in order to
satisfy all three metrics to the best extent possible. We develop an
integer-programming based approach that can yield a certifiably optimal
post-processing method for simultaneously satisfying multiple fairness criteria
under small violations. We show experiments demonstrating that our
post-processor can improve fairness across the different definitions
simultaneously with minimal model performance reduction. We also discuss
applications of our framework for model selection and fairness explainability,
thereby attempting to answer the question: who's the fairest of them all?
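Below is a minimal, illustrative sketch of how the three definitions named in the abstract can be measured on held-out predictions, so the trade-off the post-processor navigates is concrete. The function name, the two-group setup, and the toy data are assumptions made for illustration only; the paper's certifiably optimal integer-programming post-processor is not reproduced here.

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Illustrative gaps for demographic parity, equalized odds, and
    predictive rate parity between two groups (coded 0 and 1)."""
    rates = {}
    for g in (0, 1):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        rates[g] = {
            "positive_rate": yp.mean(),      # P(Yhat=1 | A=g)
            "tpr": yp[yt == 1].mean(),       # P(Yhat=1 | Y=1, A=g)
            "fpr": yp[yt == 0].mean(),       # P(Yhat=1 | Y=0, A=g)
            "ppv": yt[yp == 1].mean(),       # P(Y=1 | Yhat=1, A=g)
        }
    return {
        "demographic_parity": abs(rates[0]["positive_rate"] - rates[1]["positive_rate"]),
        "equalized_odds": max(abs(rates[0]["tpr"] - rates[1]["tpr"]),
                              abs(rates[0]["fpr"] - rates[1]["fpr"])),
        "predictive_rate_parity": abs(rates[0]["ppv"] - rates[1]["ppv"]),
    }

# Toy usage with random labels, predictions, and group membership.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
print(fairness_gaps(y_true, y_pred, group))
```

A post-processor in the spirit of the paper would then adjust predictions so that all three gaps stay below small tolerances while sacrificing as little accuracy as possible.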
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy dual robustness variables.
Our results show that rich parametrizations effectively mitigate non-dimensional, finite learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z)
- Arbitrariness Lies Beyond the Fairness-Accuracy Frontier [3.383670923637875]
We show that state-of-the-art fairness interventions can mask high predictive multiplicity behind favorable group fairness and accuracy metrics.
We propose an ensemble algorithm applicable to any fairness intervention that provably ensures more consistent predictions.
arXiv Detail & Related papers (2023-06-15T18:15:46Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice [5.175941513195566]
We show that it is possible to identify a large set of models that satisfy seemingly incompatible fairness constraints.
We offer tools and guidance for practitioners to understand when -- and to what degree -- fairness along multiple criteria can be achieved.
arXiv Detail & Related papers (2023-02-13T13:29:24Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency [0.0]
This paper focuses on the fairness concepts of PPV parity, false omission rate (FOR) parity, and sufficiency.
We show that group-specific threshold rules are optimal for PPV parity and FOR parity (a minimal threshold sketch appears after this list).
We also provide a solution for the optimal decision rules satisfying the fairness constraint sufficiency.
arXiv Detail & Related papers (2022-06-05T18:47:34Z)
- Pluralistic Image Completion with Probabilistic Mixture-of-Experts [58.81469985455467]
We introduce a unified probabilistic graph model that represents the complex interactions in image completion.
The entire procedure of image completion is then mathematically divided into several sub-procedures, which enables efficient enforcement of constraints.
The parameters of the underlying Gaussian mixture model (GMM) are task-related and optimized adaptively during training, while the number of its primitives conveniently controls the diversity of results.
arXiv Detail & Related papers (2022-05-18T17:24:21Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision-problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint [31.86959207229775]
In this paper, we propose a framework for learning an individually fair classifier.
We define the probability of individual unfairness (PIU) and solve an optimization problem in which an upper bound on PIU, which can be estimated from data, is constrained to be close to zero.
Experimental results show that our method can learn an individually fair classifier at a slight cost in accuracy.
arXiv Detail & Related papers (2020-02-17T02:46:17Z)
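As a companion to the "Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency" entry above, here is a minimal sketch of a group-specific threshold rule, the kind of decision rule that entry argues is optimal for PPV parity and FOR parity. The function names, toy data, and the one-dimensional grid search are illustrative assumptions, not that paper's method.

```python
import numpy as np

def apply_group_thresholds(scores, group, thresholds):
    """Turn risk scores into binary decisions using a per-group threshold."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])

def ppv_gap(y_true, y_pred, group):
    """Absolute difference in precision (PPV) between the two groups."""
    ppv = [y_true[(group == g) & (y_pred == 1)].mean() for g in (0, 1)]
    return abs(ppv[0] - ppv[1])

# Toy data: scores loosely correlated with the label, plus a group indicator.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 2000)
group = rng.integers(0, 2, 2000)
scores = np.clip(0.5 * y_true + rng.normal(0.25, 0.2, 2000), 0, 1)

# Fix group 0's threshold and grid-search group 1's so PPV is roughly equalized.
best = min(((t, ppv_gap(y_true, apply_group_thresholds(scores, group, {0: 0.5, 1: t}), group))
            for t in np.linspace(0.05, 0.95, 19)), key=lambda x: x[1])
print(f"threshold for group 1: {best[0]:.2f}, PPV gap: {best[1]:.3f}")
```

In practice one would tune both thresholds jointly and track FOR parity as well; the single-threshold grid search is only meant to show how a per-group threshold can close a PPV gap.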