Abstracting Fairness: Oracles, Metrics, and Interpretability
- URL: http://arxiv.org/abs/2004.01840v1
- Date: Sat, 4 Apr 2020 03:14:53 GMT
- Title: Abstracting Fairness: Oracles, Metrics, and Interpretability
- Authors: Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, Pragya Sur
- Abstract summary: We examine what can be learned from a fairness oracle equipped with an underlying understanding of ``true'' fairness.
Our results have implications for interpretability -- a highly desired but poorly defined property of classification systems.
- Score: 21.59432019966861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well understood that classification algorithms, for example, for
deciding on loan applications, cannot be evaluated for fairness without taking
context into account. We examine what can be learned from a fairness oracle
equipped with an underlying understanding of ``true'' fairness. The oracle
takes as input a (context, classifier) pair satisfying an arbitrary fairness
definition, and accepts or rejects the pair according to whether the classifier
satisfies the underlying fairness truth. Our principal conceptual result is an
extraction procedure that learns the underlying truth; moreover, the procedure
can learn an approximation to this truth given access to a weak form of the
oracle. Since every ``truly fair'' classifier induces a coarse metric, in which
those receiving the same decision are at distance zero from one another and
those receiving different decisions are at distance one, this extraction
process provides the basis for ensuring a rough form of metric fairness, also
known as individual fairness. Our principal technical result is a higher
fidelity extractor under a mild technical constraint on the weak oracle's
conception of fairness. Our framework permits the scenario in which many
classifiers, with differing outcomes, may all be considered fair. Our results
have implications for interpretability -- a highly desired but poorly defined
property of classification systems that endeavors to permit a human arbiter to
reject classifiers deemed to be ``unfair'' or illegitimately derived.
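To make the oracle abstraction and the induced coarse metric concrete, here is a minimal Python sketch. The names (`FairnessOracle`, `coarse_metric`) and the toy acceptance rule (agreement with a hidden reference classifier) are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch, assuming a hidden reference classifier stands in for
# the oracle's underlying fairness truth. Names are illustrative.
from typing import Callable

Classifier = Callable[[float], int]  # maps an individual to a binary decision


class FairnessOracle:
    """Accepts or rejects a (context, classifier) pair according to a
    hidden notion of 'true' fairness (here: a hidden reference classifier)."""

    def __init__(self, truth: Classifier):
        self._truth = truth  # hidden from the caller

    def accepts(self, context: list, classifier: Classifier) -> bool:
        # Toy acceptance rule: agree with the hidden truth on the whole context.
        return all(classifier(x) == self._truth(x) for x in context)


def coarse_metric(classifier: Classifier, x, y) -> int:
    """Coarse metric induced by a classifier: distance 0 for individuals
    receiving the same decision, 1 for individuals receiving different ones."""
    return 0 if classifier(x) == classifier(y) else 1


oracle = FairnessOracle(truth=lambda x: int(x >= 5))
candidate: Classifier = lambda x: int(x >= 5)
print(oracle.accepts(list(range(10)), candidate))  # True
print(coarse_metric(candidate, 3, 7))              # 1: different decisions
```

In these terms, an extraction procedure queries such an oracle on chosen (context, classifier) pairs and uses the accept/reject answers to approximate the hidden truth.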
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
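A minimal sketch of the linear-programming idea, assuming a toy setting with three candidate allocations; the specific fairness constraint below (equal-merit candidates receive near-equal selection probability) is an illustrative stand-in for the paper's axioms, not its exact formulation.

```python
# Hedged sketch: choose a distribution over allocations that maximizes
# expected utility subject to an illustrative individual-fairness constraint.
import numpy as np
from scipy.optimize import linprog

u = np.array([1.0, 0.9, 0.5])   # utility of allocating the job to candidate i
eps = 0.05                      # fairness slack for equally meritorious pairs

# Variables: p[i] = probability of choosing allocation i.
# Maximize u @ p  <=>  minimize -u @ p.
A_ub = np.array([[1.0, -1.0, 0.0],    # p0 - p1 <= eps
                 [-1.0, 1.0, 0.0]])   # p1 - p0 <= eps (candidates 0, 1 equal merit)
b_ub = np.array([eps, eps])
A_eq = np.ones((1, 3))                # distribution sums to 1
b_eq = np.array([1.0])

res = linprog(-u, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3)
print(res.x)  # fair utility-maximizing distribution, ~[0.525, 0.475, 0.0]
```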
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Fairness and Unfairness in Binary and Multiclass Classification: Quantifying, Calculating, and Bounding [22.449347663780767]
We propose a new interpretable measure of unfairness that enables a quantitative analysis of classifier fairness.
We show how this measure can be calculated when the classifier's conditional confusion matrices are known.
We report experiments on data sets representing diverse applications.
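As a concrete illustration of computing a quantitative disparity score from per-group conditional confusion matrices (an equalized-odds-style gap, which may differ from the paper's exact measure):

```python
# Illustrative sketch: worst pairwise gap in TPR/FPR across groups,
# computed from per-group 2x2 confusion matrices.
import numpy as np

def rates(cm):
    """TPR and FPR from a 2x2 confusion matrix [[TN, FP], [FN, TP]]."""
    tn, fp, fn, tp = cm.ravel()
    return tp / (tp + fn), fp / (fp + tn)

def unfairness(cms):
    """Worst pairwise gap in TPR or FPR across groups (not the paper's
    exact measure, but the same style of confusion-matrix-based score)."""
    rs = [rates(cm) for cm in cms]
    return max(abs(a[k] - b[k]) for a in rs for b in rs for k in (0, 1))

cm_group_a = np.array([[80, 20], [10, 90]])
cm_group_b = np.array([[70, 30], [25, 75]])
print(unfairness([cm_group_a, cm_group_b]))  # 0.15 = TPR gap (0.90 vs 0.75)
```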
arXiv Detail & Related papers (2022-06-07T12:26:28Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
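A hedged finite-difference sketch of a perturbation-based sensitivity score; the estimator and feature weighting below are illustrative assumptions, not the paper's exact definition.

```python
# Sketch: how much the model's positive-class probability moves per unit
# change in each feature, aggregated with feature weights (e.g., weights
# that upweight features correlated with a protected attribute).
import numpy as np

def prediction_sensitivity(predict_proba, x, weights, delta=1e-4):
    """Finite-difference approximation of per-feature sensitivity,
    combined into one score via a weighted sum of absolute values."""
    base = predict_proba(x)
    grads = np.array([
        (predict_proba(x + delta * np.eye(len(x))[i]) - base) / delta
        for i in range(len(x))
    ])
    return float(np.abs(grads) @ weights)

# Toy model: logistic score over 3 features.
w = np.array([2.0, -1.0, 0.5])
predict = lambda x: 1.0 / (1.0 + np.exp(-(w @ x)))
x = np.array([0.3, -0.2, 1.0])
print(prediction_sensitivity(predict, x, weights=np.array([0.7, 0.2, 0.1])))
```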
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification settings.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- Everything is Relative: Understanding Fairness with Optimal Transport [1.160208922584163]
We present an optimal transport-based approach to fairness that offers an interpretable and quantifiable exploration of bias and its structure.
Our framework is able to recover well known examples of algorithmic discrimination, detect unfairness when other metrics fail, and explore recourse opportunities.
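A minimal sketch of the optimal-transport view, assuming we compare two groups' model-score distributions with SciPy's 1-D Wasserstein distance; the paper's recourse analysis is not modeled here.

```python
# Sketch: a zero distance would mean identical score distributions
# (statistical parity); a large distance quantifies how far apart
# the groups' treatment is, in optimal-transport terms.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
scores_group_a = rng.beta(5, 2, size=1000)  # toy model scores, group A
scores_group_b = rng.beta(2, 5, size=1000)  # toy model scores, group B

print(wasserstein_distance(scores_group_a, scores_group_b))
```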
arXiv Detail & Related papers (2021-02-20T13:57:53Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
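One hedged sketch in the spirit of learning a fair metric from data (not the paper's exact algorithm): estimate the direction along which a protected attribute varies, then measure distance after projecting that direction out.

```python
# Sketch under stated assumptions: the "sensitive direction" is taken from
# a classifier trained to predict the protected attribute; the fair metric
# ignores movement along that direction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
protected = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Sensitive direction = coefficients of a model predicting the attribute.
v = LogisticRegression(max_iter=1000).fit(X, protected).coef_.ravel()
v /= np.linalg.norm(v)
P = np.eye(5) - np.outer(v, v)  # projector onto attribute-insensitive subspace

def fair_distance(x, y):
    """Distance that ignores movement along the sensitive direction."""
    return float(np.linalg.norm(P @ (x - y)))

x, y = X[0], X[0] + 2.0 * v     # y differs from x only along the sensitive direction
print(fair_distance(x, y))      # ~0: the metric treats the pair as near-identical
```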
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
- Statistical Equity: A Fairness Classification Objective [6.174903055136084]
We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness and motivate it within its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
arXiv Detail & Related papers (2020-05-14T23:19:38Z)