The Impossibility Theorem of Machine Fairness -- A Causal Perspective
- URL: http://arxiv.org/abs/2007.06024v2
- Date: Fri, 29 Jan 2021 22:45:07 GMT
- Title: The Impossibility Theorem of Machine Fairness -- A Causal Perspective
- Authors: Kailash Karthik Saravanakumar
- Abstract summary: There are three prominent metrics of machine fairness used in the community.
It has been shown statistically that it is impossible to satisfy them all at the same time.
- Score: 0.15229257192293202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasingly pervasive use of machine learning in social
and economic settings, there has been growing interest in the notion of machine
bias in the AI community. Models trained on historical data reflect biases that
exist in society and propagate them into the future through their decisions. There are three
prominent metrics of machine fairness used in the community, and it has been
shown statistically that it is impossible to satisfy them all at the same time.
This has led to ambiguity with regard to the definition of fairness. In
this report, a causal perspective to the impossibility theorem of fairness is
presented along with a causal goal for machine fairness.
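The three metrics the abstract refers to are commonly taken to be demographic parity, equalized odds, and predictive parity (calibration by group); the impossibility result says that when base rates differ across groups, an imperfect classifier cannot satisfy all three at once. The sketch below is illustrative only, not code from the paper: it computes the per-group gap for each criterion on toy data whose base rates differ, so all three gaps come out nonzero.

```python
# Illustrative sketch (not from the paper): the three fairness criteria
# commonly cited in impossibility results, each measured as an absolute
# gap between the two groups of a binary protected attribute A.

def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: selection rates should match."""
    def rate(a):
        preds = [p for p, g in zip(y_pred, group) if g == a]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equalized_odds_gap(y_true, y_pred, group):
    """max over y of |P(Yhat=1 | Y=y, A=0) - P(Yhat=1 | Y=y, A=1)|:
    true-positive and false-positive rates should match across groups."""
    def rate(a, y):
        preds = [p for p, t, g in zip(y_pred, y_true, group) if g == a and t == y]
        return sum(preds) / len(preds)
    return max(abs(rate(0, y) - rate(1, y)) for y in (0, 1))

def predictive_parity_gap(y_true, y_pred, group):
    """|P(Y=1 | Yhat=1, A=0) - P(Y=1 | Yhat=1, A=1)|:
    positive predictive value (calibration) should match across groups."""
    def ppv(a):
        trues = [t for p, t, g in zip(y_pred, y_true, group) if g == a and p == 1]
        return sum(trues) / len(trues)
    return abs(ppv(0) - ppv(1))

# Toy data chosen so base rates differ between groups (0.5 vs 0.25) and
# every conditioning cell is non-empty (the helpers above do not guard
# against empty groups).
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute A
y_true = [1, 1, 0, 0, 1, 0, 0, 0]   # true label Y
y_pred = [1, 0, 0, 0, 1, 1, 0, 0]   # classifier decision Yhat

print(demographic_parity_gap(y_pred, group))         # 0.25
print(equalized_odds_gap(y_true, y_pred, group))     # 0.5
print(predictive_parity_gap(y_true, y_pred, group))  # 0.5
```

With differing base rates, driving any one of these gaps to zero forces at least one of the others away from zero, which is the statistical core of the impossibility theorem the report revisits causally.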
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Toward A Logical Theory Of Fairness and Bias [12.47276164048813]
We argue for a formal reconstruction of fairness definitions.
We look into three notions: fairness through unawareness, demographic parity and counterfactual fairness.
arXiv Detail & Related papers (2023-06-08T09:18:28Z)
- The Flawed Foundations of Fair Machine Learning [0.0]
We show that there is a trade-off between statistically accurate outcomes and group similar outcomes in any data setting where group disparities exist.
We introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes.
arXiv Detail & Related papers (2023-06-02T10:07:12Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Fairness and Randomness in Machine Learning: Statistical Independence and Relativization [10.482805367361818]
We dissect the role of statistical independence in fairness and randomness notions regularly used in machine learning.
We argue that randomness and fairness should reflect their nature as modeling assumptions in machine learning.
arXiv Detail & Related papers (2022-07-27T15:55:05Z)
- Identifiability of Causal-based Fairness Notions: A State of the Art [4.157415305926584]
Machine learning algorithms can produce biased outcomes/predictions, typically against minorities and under-represented sub-populations.
This paper is a compilation of the major identifiability results which are of particular relevance for machine learning fairness.
arXiv Detail & Related papers (2022-03-11T13:10:32Z)
- The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed, considering different notions of what constitutes a "fair decision" in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z)
- Statistical Equity: A Fairness Classification Objective [6.174903055136084]
We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness and motivate it within its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
arXiv Detail & Related papers (2020-05-14T23:19:38Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.