The Flawed Foundations of Fair Machine Learning
- URL: http://arxiv.org/abs/2306.01417v1
- Date: Fri, 2 Jun 2023 10:07:12 GMT
- Title: The Flawed Foundations of Fair Machine Learning
- Authors: Robert Lee Poe and Soumia Zohra El Mestari
- Abstract summary: We show that there is a trade-off between statistically accurate outcomes and group similar outcomes in any data setting where group disparities exist.
We introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The definition and implementation of fairness in automated decisions have been
extensively studied by the research community. Yet fallacious reasoning,
misleading assertions, and questionable practices hide at the foundations of the
current fair machine learning paradigm. Those flaws result from a failure to
understand that the trade-off between statistically accurate outcomes and group
similar outcomes exists as an independent, external constraint rather than as a
subjective manifestation, as has been commonly argued. First,
we explain that there is only one conception of fairness present in the fair
machine learning literature: group similarity of outcomes based on a sensitive
attribute where the similarity benefits an underprivileged group. Second, we
show that there is, in fact, a trade-off between statistically accurate
outcomes and group similar outcomes in any data setting where group disparities
exist, and that the trade-off presents an existential threat to the equitable,
fair machine learning approach. Third, we introduce a proof-of-concept
evaluation to aid researchers and designers in understanding the relationship
between statistically accurate outcomes and group similar outcomes. Finally, we
provide suggestions for future work, aimed at data scientists, legal scholars,
and data ethicists, that build on the conceptual and experimental framework
described throughout this article.
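The claimed trade-off is easy to exhibit numerically. Below is a minimal sketch (a hypothetical stand-in using synthetic data and scikit-learn, not the authors' proof-of-concept evaluation) of a data setting with a genuine group disparity, where tuning per-group decision thresholds toward equal positive rates (group similar outcomes) necessarily reduces statistical accuracy:

```python
# Minimal sketch of the accuracy/parity trade-off on synthetic data where a
# real group disparity exists (illustrative only; not the paper's evaluation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                 # sensitive attribute (0 or 1)
x = rng.normal(loc=group * 1.0, size=n)       # group 1 is shifted: a true disparity
y = rng.binomial(1, 1 / (1 + np.exp(-(2 * x - 1))))  # ground-truth labels

clf = LogisticRegression().fit(x[:, None], y)
score = clf.predict_proba(x[:, None])[:, 1]

def report(t0, t1):
    """Accuracy and positive-rate gap under per-group thresholds t0, t1."""
    y_hat = np.where(group == 0, score >= t0, score >= t1)
    acc = (y_hat == y).mean()
    gap = abs(y_hat[group == 1].mean() - y_hat[group == 0].mean())
    print(f"accuracy={acc:.3f}  positive-rate gap={gap:.3f}")

report(0.5, 0.5)    # statistically accurate outcomes: large group gap
report(0.5, 0.88)   # thresholds pushed toward parity: gap shrinks, accuracy falls
```

The particular thresholds are arbitrary; the point is that in any such setting no threshold choice attains both parity and maximal accuracy at once.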
Related papers
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions.
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
It is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
Confounding factors, non-protected variables that nonetheless exhibit systematic between-group differences, can significantly affect fairness evaluation.
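One concrete way to operationalize that observation (a sketch of covariate matching in general, not necessarily the paper's procedure) is to compare each individual only against a counterpart from the other group with similar non-protected covariates:

```python
# Hedged sketch of a matching-based fairness check (not the paper's method):
# pair each member of group 1 with the nearest group-0 individual on
# non-protected covariates, then compare outcomes within matched pairs.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)
covariates = rng.normal(loc=group[:, None] * 0.8, size=(n, 3))  # confounded with group
outcome = (covariates.sum(axis=1) + rng.normal(size=n) > 0).astype(int)

a, b = covariates[group == 0], covariates[group == 1]
nn = NearestNeighbors(n_neighbors=1).fit(a)
_, idx = nn.kneighbors(b)                     # counterpart of each group-1 member

raw_gap = outcome[group == 1].mean() - outcome[group == 0].mean()
matched_gap = (outcome[group == 1] - outcome[group == 0][idx[:, 0]]).mean()
print(f"raw outcome gap:     {raw_gap:.3f}")      # inflated by confounding
print(f"matched-pair gap:    {matched_gap:.3f}")  # much smaller after matching
```

Here the outcome depends only on the covariates, so the raw gap is pure confounding and the matched gap shrinks toward zero, up to matching error in the distribution tails.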
- Counterfactually Fair Regression with Double Machine Learning [0.0]
This paper proposes Double Machine Learning (DML) Fairness.
It analogises the problem of counterfactual fairness in regression to that of estimating counterfactual outcomes in causal inference.
It demonstrates the approach in a simulation study pertaining to discrimination in workplace hiring and an application on real data estimating the GPAs of law school students.
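The summary does not spell the estimator out, but the DML analogy suggests a partialling-out construction. A hedged sketch of that general technique follows (illustrative; the paper's actual estimator may differ, and a full DML treatment would add cross-fitting):

```python
# Hedged sketch of DML-style partialling-out for fair regression
# (an illustration of the general technique, not the paper's estimator).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 4_000
s = rng.integers(0, 2, n).astype(float)        # protected attribute
x = rng.normal(size=(n, 2)) + s[:, None]       # features correlated with s
y = x @ np.array([1.0, 0.5]) + 2.0 * s + rng.normal(size=n)  # outcome depends on s

S = s[:, None]
# Partial the protected attribute out of the outcome and each feature,
# then fit the predictive model on what remains.
y_res = y - GradientBoostingRegressor().fit(S, y).predict(S)
x_res = np.column_stack([
    x[:, j] - GradientBoostingRegressor().fit(S, x[:, j]).predict(S)
    for j in range(x.shape[1])
])
model = LinearRegression().fit(x_res, y_res)   # learns from s-free variation only
print("coefficients on residualized features:", model.coef_.round(2))
```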
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
- Error Parity Fairness: Testing for Group Fairness in Regression Tasks [5.076419064097733]
This work presents error parity as a regression fairness notion and introduces a testing methodology to assess group fairness.
The assessment is followed by a suitable permutation test that compares groups on several statistics to explore disparities and identify impacted groups.
Overall, the proposed regression fairness testing methodology fills a gap in the fair machine learning literature and may serve as a part of larger accountability assessments and algorithm audits.
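A hedged sketch of what such a test can look like (the error statistic and permutation scheme here are illustrative choices, not necessarily the paper's exact procedure):

```python
# Hedged sketch of a permutation test for error parity in a regression task
# (illustrative statistic and procedure; the paper's may differ).
import numpy as np

rng = np.random.default_rng(3)
y_true = rng.normal(size=2_000)
group = rng.integers(0, 2, 2_000)
y_pred = y_true + rng.normal(scale=np.where(group == 1, 1.5, 1.0))  # group 1 noisier

def gap(errors, g):
    """Absolute difference in mean absolute error between the two groups."""
    return abs(errors[g == 1].mean() - errors[g == 0].mean())

errors = np.abs(y_true - y_pred)
observed = gap(errors, group)
# Null distribution: recompute the gap under random relabelings of group.
perm = np.array([gap(errors, rng.permutation(group)) for _ in range(2_000)])
p_value = (perm >= observed).mean()
print(f"observed MAE gap={observed:.3f}, permutation p-value={p_value:.4f}")
```

With the simulated noise imbalance above, the observed gap sits far outside the permutation distribution and the test rejects error parity.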
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
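The underlying intuition is straightforward to probe. The toy below (a hedged illustration of perturbation-based sensitivity, not the paper's formal ACCUMULATED PREDICTION SENSITIVITY definition) estimates how much a classifier's score moves per unit perturbation of each input feature:

```python
# Toy finite-difference probe of prediction sensitivity (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(1_000, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def sensitivity(model, X, eps=1e-3):
    """Mean absolute change in predicted probability per unit feature nudge."""
    base = model.predict_proba(X)[:, 1]
    sens = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        sens.append(np.abs(model.predict_proba(Xp)[:, 1] - base).mean() / eps)
    return np.array(sens)

print("per-feature sensitivity:", sensitivity(clf, X).round(3))
# High sensitivity to a protected (or protected-correlated) feature flags a
# potential fairness concern, linking this probe to group and individual fairness.
```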
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
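For concreteness, the two definitions named above reduce to simple rate comparisons over hard classifications, which is exactly why they do not extend past classification. A minimal sketch:

```python
# Minimal sketch of the two classification-only group fairness metrics named above.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive prediction rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))         # 0.0: equal positive rates
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.333: unequal TPRs
```

Both metrics require binary predictions, and equal opportunity additionally requires binary labels; generalizing past that assumption is the gap the paper targets.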
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.