The Measure and Mismeasure of Fairness
- URL: http://arxiv.org/abs/1808.00023v3
- Date: Mon, 14 Aug 2023 19:14:00 GMT
- Title: The Measure and Mismeasure of Fairness
- Authors: Sam Corbett-Davies, Johann D. Gaebler, Hamed Nilforoshan, Ravi Shroff,
and Sharad Goel
- Abstract summary: We argue that the equitable design of algorithms requires grappling with their context-specific consequences.
We offer strategies to ensure algorithms are better aligned with policy goals.
- Score: 6.6697126372463345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of fair machine learning aims to ensure that decisions guided by
algorithms are equitable. Over the last decade, several formal, mathematical
definitions of fairness have gained prominence. Here we first assemble and
categorize these definitions into two broad families: (1) those that constrain
the effects of decisions on disparities; and (2) those that constrain the
effects of legally protected characteristics, like race and gender, on
decisions. We then show, analytically and empirically, that both families of
definitions typically result in strongly Pareto dominated decision policies.
For example, in the case of college admissions, adhering to popular formal
conceptions of fairness would simultaneously result in lower student-body
diversity and a less academically prepared class, relative to what one could
achieve by explicitly tailoring admissions policies to achieve desired
outcomes. In this sense, requiring that these fairness definitions hold can,
perversely, harm the very groups they were designed to protect. In contrast to
axiomatic notions of fairness, we argue that the equitable design of algorithms
requires grappling with their context-specific consequences, akin to the
equitable design of policy. We conclude by listing several open challenges in
fair machine learning and offering strategies to ensure algorithms are better
aligned with policy goals.
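As a toy illustration of the comparison described in the abstract, the sketch below scores two hypothetical admissions policies on the two outcomes at stake - student-body diversity and academic preparedness - and checks whether one Pareto dominates the other. The synthetic data, group shares, and the 35% diversity target are invented for illustration; this is not the paper's analysis.
```python
# Illustrative sketch (not the paper's code): compare two toy admissions
# policies on two policy outcomes -- diversity and academic preparedness --
# and check for Pareto dominance. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.binomial(1, 0.3, n)                 # 1 = minority group (assumed)
score = rng.normal(loc=np.where(group == 1, -0.3, 0.0), scale=1.0, size=n)

def outcomes(admit):
    """Return (diversity, preparedness) for a boolean admission mask."""
    diversity = group[admit].mean()             # share of admits from group 1
    preparedness = score[admit].mean()          # mean academic score of admits
    return diversity, preparedness

capacity = int(0.2 * n)

# Policy A: admit the top scorers within each group so that admission
# *rates* are equal across groups (a demographic-parity-style rule).
admit_a = np.zeros(n, dtype=bool)
for g in (0, 1):
    idx = np.flatnonzero(group == g)
    k = int(capacity * len(idx) / n)
    admit_a[idx[np.argsort(score[idx])[-k:]]] = True

# Policy B: explicitly tailor group-specific thresholds to hit a desired
# diversity level while admitting the strongest applicants given that target.
target_minority = 0.35                          # hypothetical policy target
k1 = int(capacity * target_minority)
k0 = capacity - k1
admit_b = np.zeros(n, dtype=bool)
for g, k in ((0, k0), (1, k1)):
    idx = np.flatnonzero(group == g)
    admit_b[idx[np.argsort(score[idx])[-k:]]] = True

div_a, prep_a = outcomes(admit_a)
div_b, prep_b = outcomes(admit_b)
print(f"parity-style policy: diversity={div_a:.3f}, preparedness={prep_a:.3f}")
print(f"tailored policy:     diversity={div_b:.3f}, preparedness={prep_b:.3f}")
print("tailored Pareto-dominates:", div_b >= div_a and prep_b >= prep_a)
```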
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness (a toy objective in this spirit is sketched below).
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
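A minimal sketch of what a counterfactual-pair contrastive objective could look like, assuming positives are counterfactual copies of each example with the sensitive attribute flipped; the encoder, dimensions, and loss form are illustrative guesses, not DualFair's actual implementation.
```python
# Hedged sketch (not DualFair's code): an InfoNCE-style loss where z[i]
# should match z_cf[i], its counterfactual with the sensitive attribute
# flipped, and repel all other examples in the batch.
import torch
import torch.nn.functional as F

def counterfactual_contrastive_loss(z, z_cf, temperature=0.1):
    """Cross-entropy over a similarity matrix with positives on the diagonal."""
    z = F.normalize(z, dim=1)
    z_cf = F.normalize(z_cf, dim=1)
    logits = z @ z_cf.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with a hypothetical encoder producing 128-d embeddings.
encoder = torch.nn.Sequential(torch.nn.Linear(16, 128))
x = torch.randn(32, 16)       # original inputs
x_cf = torch.randn(32, 16)    # stand-in for counterfactuals w.r.t. race/gender
loss = counterfactual_contrastive_loss(encoder(x), encoder(x_cf))
loss.backward()
```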
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many methods that have been proposed can cause direct harm, like false rejection and unequal benefits from robustness training.
We compare the equity of two rejection-based defenses, randomized smoothing and neural rejection, and find randomized smoothing more equitable due to its sampling mechanism's effect on minority groups (a toy version of this comparison is sketched below).
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
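A hedged sketch of the kind of per-group analysis described above: a randomized-smoothing-style rejection rule that abstains when noisy votes disagree, compared across two sub-populations. The base classifier, noise scale, vote threshold, and group distributions are all invented stand-ins.
```python
# Toy per-group equity analysis of a smoothing-based rejection rule.
import numpy as np

rng = np.random.default_rng(0)

def base_classifier(x):
    return (x.sum(axis=-1) > 0).astype(int)     # stand-in model

def smoothed_predict(x, sigma=0.5, n_samples=100, vote_threshold=0.7):
    """Majority vote over Gaussian-perturbed copies of x; return -1
    (reject) when the top class's vote share is below the threshold."""
    noise = rng.normal(0.0, sigma, size=(n_samples, *x.shape))
    votes = base_classifier(x + noise)          # (n_samples,) labels
    share = votes.mean()
    top = int(share > 0.5)
    top_share = share if top == 1 else 1.0 - share
    return top if top_share >= vote_threshold else -1

# Compare rejection rates across two hypothetical sub-populations; the
# group nearer the decision boundary gets rejected more often.
for name, loc in [("group A", 0.8), ("group B", 0.1)]:
    xs = rng.normal(loc, 1.0, size=(500, 8))
    rejects = np.mean([smoothed_predict(x) == -1 for x in xs])
    print(f"{name}: rejection rate = {rejects:.2f}")
```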
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting that respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations (a toy instance is sketched below).
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
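The LP below is an assumed toy formulation in the spirit of the framework described above (not the paper's exact program): pick a distribution over candidate allocations that maximizes expected utility, subject to a per-agent floor on expected benefit.
```python
# Toy LP: fair utility-maximizing distribution over allocations.
import numpy as np
from scipy.optimize import linprog

utilities = np.array([5.0, 4.0, 3.0])        # utility of allocations a1..a3
# shares[i, j]: benefit agent j receives under allocation i (assumed values)
shares = np.array([
    [1.0, 0.0],
    [0.5, 0.5],
    [0.0, 1.0],
])
min_share = np.array([0.3, 0.3])             # fairness floor per agent (assumed)

# Variables p_i: probability of playing allocation i.
# maximize utilities @ p  <=>  minimize -utilities @ p
res = linprog(
    c=-utilities,
    A_ub=-shares.T,          # enforces shares.T @ p >= min_share
    b_ub=-min_share,
    A_eq=np.ones((1, 3)),    # probabilities sum to 1
    b_eq=[1.0],
    bounds=[(0, 1)] * 3,
)
print("distribution over allocations:", np.round(res.x, 3))
print("expected utility:", -res.fun)
```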
- Causal Conceptions of Fairness and their Consequences [1.9006392177894293]
We show that two families of causal definitions of algorithmic fairness result in strongly dominated decision policies.
We prove that the resulting policies require admitting all students with the same probability, regardless of academic qualifications or group membership.
arXiv Detail & Related papers (2022-07-12T04:26:26Z)
- Legal perspective on possible fairness measures - A legal discussion using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification problems (both criteria are sketched in their standard form below).
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
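For reference, the two criteria named above in their standard binary-classification form, computed on toy data; the paper's point is precisely that these forms do not extend unambiguously beyond this setting.
```python
# Standard group-fairness gaps on toy binary-classification data.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.5, 1000)   # outcomes
group = rng.binomial(1, 0.4, 1000)    # protected attribute
y_pred = rng.binomial(1, 0.5, 1000)   # some classifier's decisions

def demographic_parity_gap(y_pred, group):
    """|P(pred=1 | g=0) - P(pred=1 | g=1)|"""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """|P(pred=1 | y=1, g=0) - P(pred=1 | y=1, g=1)|, i.e. TPR difference."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print("DP gap:", demographic_parity_gap(y_pred, group))
print("EO gap:", equal_opportunity_gap(y_pred, y_true, group))
```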
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- Inherent Trade-offs in the Fair Allocation of Treatments [2.6143568807090696]
Explicit and implicit bias clouds human judgement, leading to discriminatory treatment of minority groups.
We propose a causal framework that learns optimal intervention policies from data subject to fairness constraints.
arXiv Detail & Related papers (2020-10-30T17:55:00Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables (a toy computation is sketched below).
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
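A toy reading of conditional fairness, assuming it is evaluated as a demographic-parity gap within each stratum of the fairness variables and then averaged; this is an illustrative computation, not the DCFR method.
```python
# Demographic-parity gap computed within strata of a fairness variable.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
fair_var = rng.integers(0, 3, n)     # e.g. job level, a legitimate factor
group = rng.binomial(1, 0.5, n)
y_pred = rng.binomial(1, 0.3 + 0.2 * (fair_var == 2), n)

def conditional_dp_gap(y_pred, group, fair_var):
    """Average |P(pred=1|g=0,F=f) - P(pred=1|g=1,F=f)| over strata f."""
    gaps = []
    for f in np.unique(fair_var):
        s = fair_var == f
        gaps.append(abs(y_pred[s & (group == 0)].mean()
                        - y_pred[s & (group == 1)].mean()))
    return float(np.mean(gaps))

print("conditional DP gap:", conditional_dp_gap(y_pred, group, fair_var))
```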
- Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning [43.518329314620416]
We study algorithmic policies that explicitly trade off between a private objective (such as profit) and a public objective (such as social welfare).
Our results shed light on inherent trade-offs in using machine learning for decisions that impact social welfare (a toy trade-off sweep is sketched below).
arXiv Detail & Related papers (2020-03-15T02:49:39Z)
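A minimal sketch of the trade-off described above, assuming a score-based policy that acts whenever an alpha-weighted combination of the private and public scores is positive; sweeping alpha traces the frontier. All scores are synthetic.
```python
# Sweep the weight alpha on a combined score to trace the profit/welfare
# trade-off of a threshold policy. Scores are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
profit = rng.normal(0, 1, 2000)      # private objective per individual
welfare = rng.normal(0, 1, 2000)     # public objective per individual

for alpha in (0.0, 0.5, 1.0):
    decide = alpha * profit + (1 - alpha) * welfare > 0
    print(f"alpha={alpha:.1f}: total profit={profit[decide].sum():7.1f}, "
          f"total welfare={welfare[decide].sum():7.1f}")
```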