What-is and How-to for Fairness in Machine Learning: A Survey,
Reflection, and Perspective
- URL: http://arxiv.org/abs/2206.04101v2
- Date: Fri, 2 Jun 2023 06:04:56 GMT
- Title: What-is and How-to for Fairness in Machine Learning: A Survey,
Reflection, and Perspective
- Authors: Zeyu Tang, Jiji Zhang, Kun Zhang
- Abstract summary: We review and reflect on various fairness notions previously proposed in the machine learning literature.
We also consider the long-term impact induced by current predictions and decisions.
This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) with the means (which spectrum of fairness analysis is of interest) to fulfill the intended purpose.
- Score: 13.124434298120494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic fairness has attracted increasing attention in the machine
learning community. Various definitions have been proposed in the literature, but
the differences and connections among them are not clearly addressed. In this
paper, we review and reflect on various fairness notions previously proposed in
the machine learning literature, and attempt to draw connections to arguments in
moral and political philosophy, especially theories of justice. We also consider
fairness inquiries from a dynamic perspective, and further consider the long-term
impact induced by current predictions and decisions. In light of the differences
among these characterizations of fairness, we present a flowchart that encompasses
the implicit assumptions and expected outcomes of different types of fairness
inquiries on the data generating process, on the predicted outcome, and on the
induced impact, respectively. This paper demonstrates the importance of matching
the mission (which kind of fairness one would like to enforce) with the means
(which spectrum of fairness analysis is of interest, and what the appropriate
analysis scheme is) to fulfill the intended purpose.
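To make the survey's notion of fairness analyses "on the predicted outcome" concrete, the following is a minimal, hypothetical sketch (not taken from the paper) that computes three common observational fairness notions on toy data: statistical (demographic) parity, equal opportunity, and predictive parity. The synthetic data, group encoding, and rates are assumptions made purely for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive prediction rates between two groups
    (statistical / demographic parity)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates between two groups
    (equal opportunity)."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

def predictive_parity_gap(y_true, y_pred, group):
    """Absolute difference in positive predictive value (precision) between
    two groups (predictive parity)."""
    ppv_0 = y_true[(group == 0) & (y_pred == 1)].mean()
    ppv_1 = y_true[(group == 1) & (y_pred == 1)].mean()
    return abs(ppv_0 - ppv_1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, size=n)            # binary protected attribute
    y_true = rng.binomial(1, 0.4 + 0.1 * group)   # toy ground-truth labels
    y_pred = rng.binomial(1, 0.3 + 0.2 * group)   # toy model predictions
    print("demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("equal opportunity gap :", equal_opportunity_gap(y_true, y_pred, group))
    print("predictive parity gap :", predictive_parity_gap(y_true, y_pred, group))
```

All three quantities above are computed purely from predicted outcomes and observed labels; analyses on the data generating process (e.g., causal or path-specific effects) or on the induced long-term impact require additional assumptions and models, which is precisely the matching of mission and means the paper argues for.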
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- On Prediction-Modelers and Decision-Makers: Why Fairness Requires More Than a Fair Prediction Model [1.3996171129586732]
An implicit ambiguity in the field of prediction-based decision-making concerns the relation between the concepts of prediction and decision.
We show the different ways in which these two elements influence the final fairness properties of a prediction-based decision system.
We propose a framework that enables a better understanding and reasoning of the conceptual logic of creating fairness in prediction-based decision-making.
arXiv Detail & Related papers (2023-10-09T10:34:42Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to make consequential decisions in problems such as job hiring and loan granting.
However, seemingly objective machine learning (ML) algorithms are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them, as well as with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and with individual fairness (a toy sketch of the general idea appears after this list).
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Promises and Challenges of Causality for Ethical Machine Learning [2.1946447418179664]
We lay out the conditions for appropriate application of causal fairness under the "potential outcomes framework".
We highlight key aspects of causal inference that are often ignored in the causal fairness literature.
We argue that such a conceptualization of the intervention is key to evaluating the validity of causal assumptions.
arXiv Detail & Related papers (2022-01-26T00:04:10Z)
- The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed, considering different notions of what constitutes a "fair decision" in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z)
- Who Gets What, According to Whom? An Analysis of Fairness Perceptions in Service Allocation [2.69180747382622]
We experimentally explore five novel research questions at the intersection of the "Who," "What," and "How" of fairness perceptions.
Our results suggest that the "Who" and "What," at least, matter in ways that 1) are not easily explained by any one theoretical perspective and 2) have critical implications for how perceptions of fairness should be measured and/or integrated into algorithmic decision-making systems.
arXiv Detail & Related papers (2021-05-10T15:31:22Z)
- Machine learning fairness notions: Bridging the gap with real-world applications [4.157415305926584]
Fairness has emerged as an important requirement to guarantee that Machine Learning predictive systems do not discriminate against specific individuals or entire sub-populations.
This paper is a survey that illustrates the subtleties between fairness notions through a large number of examples and scenarios.
arXiv Detail & Related papers (2020-06-30T13:01:06Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
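As referenced in the "Measuring Fairness of Text Classifiers via Prediction Sensitivity" entry above, a sensitivity-based fairness probe perturbs input features and accumulates how much the model's prediction changes. The sketch below is a rough, hypothetical illustration of that general idea only; it is not the paper's exact ACCUMULATED PREDICTION SENSITIVITY metric, and the logistic-regression model, synthetic data, choice of perturbed feature, and perturbation size are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def feature_sensitivity(model, X, feature_idx, eps=0.1):
    """Average absolute change in the predicted probability when a single
    input feature is perturbed by eps (a finite-difference sensitivity)."""
    X_pert = X.copy()
    X_pert[:, feature_idx] += eps
    p = model.predict_proba(X)[:, 1]
    p_pert = model.predict_proba(X_pert)[:, 1]
    return float(np.mean(np.abs(p_pert - p)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 5_000, 4
    X = rng.normal(size=(n, d))
    group = (X[:, 0] > 0).astype(int)  # feature 0 acts as a proxy for group membership
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 1] + 0.5 * group))))
    model = LogisticRegression().fit(X, y)
    # A large sensitivity to the group-proxy feature indicates that predictions
    # depend on it, which relates informally to statistical-parity violations.
    print("sensitivity to group proxy (feature 0):", feature_sensitivity(model, X, 0))
    print("sensitivity to unrelated feature (3)  :", feature_sensitivity(model, X, 3))
```

In this toy setup, a noticeably higher sensitivity for the group-proxy feature flags a potential disparity, loosely mirroring the link the paper draws between prediction sensitivity and statistical parity.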
This list is automatically generated from the titles and abstracts of the papers on this site.