Testing Relative Fairness in Human Decisions With Machine Learning
- URL: http://arxiv.org/abs/2112.11279v2
- Date: Sun, 17 Dec 2023 20:06:15 GMT
- Title: Testing Relative Fairness in Human Decisions With Machine Learning
- Authors: Zhe Yu, Xiaoyin Xi
- Abstract summary: This work aims to test relative fairness in human decisions.
Instead of defining what "absolutely" fair decisions are, we check the relative fairness of one decision set against another.
We show that a machine learning model trained on the human decisions can inherit the bias/preference.
- Score: 4.8518076650315045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in decision-making has been a long-standing issue in our society.
Compared to algorithmic fairness, fairness in human decisions is even more
important, since there are processes where humans make the final decisions and
since machine learning models inherit bias from the human decisions they were
trained on. However, the standards for fairness in human decisions are highly
subjective and contextual, which makes it difficult to test "absolute"
fairness in human decisions. To bypass this issue, this work aims to test
relative fairness in human decisions. That is, instead of defining what
"absolutely" fair decisions are, we check the relative fairness of one decision
set against another. An example outcome could be: Decision Set A favors females
over males more than Decision Set B does. Such relative fairness has the
following
benefits: (1) it avoids the ambiguous and contradictory definition of
"absolute" fair decisions; (2) it reveals the relative preference and bias
between different human decisions; (3) if a reference set of decisions is
provided, relative fairness of other decision sets against this reference set
can reflect whether those decision sets are fair by the standard of that
reference set. We define the relative fairness with statistical tests (null
hypothesis and effect size tests) of the decision differences across each
sensitive group. Furthermore, we show that a machine learning model trained on
the human decisions can inherit the bias/preference and therefore can be
utilized to estimate the relative fairness between two decision sets made on
different data.
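As a rough illustration of the relative fairness test described above (a minimal sketch, not the authors' implementation; the function and variable names are hypothetical, and NumPy/SciPy are assumed), one can compare two decision sets made on the same individuals by running a null-hypothesis test and an effect-size test on the per-individual decision differences within each sensitive group:

import numpy as np
from scipy import stats

def relative_fairness(decisions_a, decisions_b, sensitive, alpha=0.05):
    """Compare two decision sets made on the same individuals.

    For each sensitive group, test whether the per-individual decision
    difference (A - B) in that group differs from the rest of the
    population: a null-hypothesis test plus Cohen's d as effect size.
    """
    decisions_a = np.asarray(decisions_a, dtype=float)
    decisions_b = np.asarray(decisions_b, dtype=float)
    sensitive = np.asarray(sensitive)
    diff = decisions_a - decisions_b  # positive => A is more favorable than B

    results = {}
    for group in np.unique(sensitive):
        in_group, rest = diff[sensitive == group], diff[sensitive != group]
        # Null-hypothesis test on the two difference distributions.
        _, p_value = stats.mannwhitneyu(in_group, rest, alternative="two-sided")
        # Cohen's d on the mean difference as the effect size.
        pooled_sd = np.sqrt((in_group.var(ddof=1) + rest.var(ddof=1)) / 2)
        cohens_d = (in_group.mean() - rest.mean()) / pooled_sd
        results[group] = {"p_value": p_value, "cohens_d": cohens_d,
                          "favored_more_by_A": p_value < alpha and cohens_d > 0}
    return results

# Toy example: set A flips some of set B's rejections to acceptances for group "F".
rng = np.random.default_rng(0)
sensitive = rng.choice(["F", "M"], size=200)
decisions_b = rng.binomial(1, 0.5, size=200)
decisions_a = np.where((sensitive == "F") & (rng.random(200) < 0.2), 1, decisions_b)
print(relative_fairness(decisions_a, decisions_b, sensitive))

A small p-value together with a positive effect size for a group would then be read as "Decision Set A favors that group more than Decision Set B does."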
Related papers
- (Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers [0.0]
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains.
We quantify the uncertainty of the disparity to enhance discrimination assessments.
We define preferences over decision-makers and use a brute-force search to choose the optimal decision-maker.
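One common way to quantify the uncertainty of a disparity estimate, not necessarily this paper's method, is a bootstrap confidence interval; a minimal sketch with hypothetical names follows.

import numpy as np

def disparity_with_uncertainty(decisions, sensitive, group_a, group_b,
                               n_boot=2000, seed=0):
    """Demographic-parity gap between two groups with a bootstrap 95% CI.

    Assumes both groups are reasonably large, so bootstrap resamples
    virtually always contain members of each group.
    """
    rng = np.random.default_rng(seed)
    decisions = np.asarray(decisions, dtype=float)
    sensitive = np.asarray(sensitive)

    def gap(idx):
        d, s = decisions[idx], sensitive[idx]
        return d[s == group_a].mean() - d[s == group_b].mean()

    point = gap(np.arange(len(decisions)))
    boot = [gap(rng.integers(0, len(decisions), len(decisions)))
            for _ in range(n_boot)]
    low, high = np.percentile(boot, [2.5, 97.5])
    # A narrow interval far from 0 indicates a "certain" disparity; an interval
    # straddling 0 signals that the estimated unfairness is itself uncertain.
    return point, (low, high)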
arXiv Detail & Related papers (2024-09-19T11:44:03Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
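A minimal sketch of the kind of per-group zero-shot classification check such an evaluation involves (the Hugging Face checkpoint of OpenAI's CLIP is assumed; the sample file names, labels, and demographic groups are hypothetical placeholders, and this is not the paper's taxonomy or metric):

from collections import defaultdict
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Hypothetical evaluation triples: (image_path, true_label, demographic_group).
samples = [("img_0.jpg", "doctor", "female"), ("img_1.jpg", "doctor", "male")]
labels = ["doctor", "nurse"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

correct, total = defaultdict(int), defaultdict(int)
for path, true_label, group in samples:
    image = Image.open(path).convert("RGB")
    inputs = processor(text=[f"a photo of a {label}" for label in labels],
                       images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(labels))
    predicted = labels[logits.argmax(dim=-1).item()]
    correct[group] += int(predicted == true_label)
    total[group] += 1

# Per-group zero-shot accuracy; the gap between groups is a simple bias signal.
for group in total:
    print(group, correct[group] / total[group])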
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
arXiv Detail & Related papers (2022-06-06T20:31:55Z)
- Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This [0.8889304968879161]
We feel that, akin to human decisions, judgments of artificial agents should necessarily be grounded in some moral principles.
Yet a decision-maker can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making.
arXiv Detail & Related papers (2021-11-15T05:39:02Z)
- Legal perspective on possible fairness measures - A legal discussion using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z)
- Learning the Preferences of Uncertain Humans with Inverse Decision Theory [10.926992035470372]
We study the setting of inverse decision theory (IDT), a framework where a human is observed making non-sequential binary decisions under uncertainty.
In IDT, the human's preferences are conveyed through their loss function, which expresses a tradeoff between different types of mistakes.
We show that it is actually easier to identify preferences when the decision problem is more uncertain.
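The loss-function tradeoff in IDT can be illustrated with a small sketch (not the paper's formulation; the names are hypothetical): with false-positive cost c_fp and false-negative cost c_fn, the expected-loss-minimizing rule accepts whenever the posterior probability exceeds c_fp / (c_fp + c_fn), so observed decisions reveal the cost ratio.

def optimal_threshold(c_fp: float, c_fn: float) -> float:
    """Posterior-probability threshold implied by a (c_fp, c_fn) loss tradeoff.

    Accepting risks a false positive with expected cost c_fp * (1 - p);
    rejecting risks a false negative with expected cost c_fn * p. Accepting is
    the lower-expected-loss choice exactly when p > c_fp / (c_fp + c_fn).
    """
    return c_fp / (c_fp + c_fn)

def decide(p_positive: float, c_fp: float, c_fn: float) -> bool:
    """A non-sequential binary decision under uncertainty."""
    return p_positive > optimal_threshold(c_fp, c_fn)

# Observing many (p_positive, decision) pairs lets an IDT-style analysis recover
# the cost ratio c_fp : c_fn from the probability at which the decisions flip.
print(optimal_threshold(c_fp=1.0, c_fn=3.0))        # 0.25 -> eager to accept
print(decide(p_positive=0.4, c_fp=1.0, c_fn=3.0))   # True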
arXiv Detail & Related papers (2021-06-19T00:11:13Z)
- The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed that consider different notions of what constitutes a "fair decision" in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
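Two of the most common definitions in this zoo, demographic parity and equalized odds, can be computed directly; a minimal NumPy sketch with hypothetical names:

import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(y_hat=1 | group=a) - P(y_hat=1 | group=b)| for a binary attribute."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    a, b = np.unique(group)
    return abs(y_pred[group == a].mean() - y_pred[group == b].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive and false-positive rates across two groups.

    Assumes both groups contain both positive and negative ground-truth labels.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    a, b = np.unique(group)

    def rate(g, label):  # P(y_hat=1 | group=g, y=label)
        mask = (group == g) & (y_true == label)
        return y_pred[mask].mean()

    tpr_gap = abs(rate(a, 1) - rate(b, 1))
    fpr_gap = abs(rate(a, 0) - rate(b, 0))
    return max(tpr_gap, fpr_gap)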
arXiv Detail & Related papers (2021-06-01T13:19:30Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
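Conditional fairness, as summarized here, evaluates the disparity after conditioning on a set of legitimate "fairness variables"; the DCFR regularizer itself is not reproduced, but a minimal sketch of the conditional demographic-parity gap it targets (hypothetical names, plain NumPy) looks like:

import numpy as np

def conditional_parity_gap(y_pred, sensitive, fairness_var):
    """Demographic-parity gap averaged over strata of a fairness variable.

    For each stratum of the fairness variable (e.g. job type in hiring),
    compute the gap in positive-decision rates between sensitive groups, then
    average the gaps weighted by stratum size. Assumes every sensitive group
    appears in every stratum.
    """
    y_pred, sensitive, fairness_var = map(np.asarray, (y_pred, sensitive, fairness_var))
    groups = np.unique(sensitive)
    total, n = 0.0, len(y_pred)
    for stratum in np.unique(fairness_var):
        mask = fairness_var == stratum
        rates = [y_pred[mask & (sensitive == g)].mean() for g in groups]
        total += mask.sum() / n * (max(rates) - min(rates))
    return total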
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.