Testing Relative Fairness in Human Decisions With Machine Learning
- URL: http://arxiv.org/abs/2112.11279v2
- Date: Sun, 17 Dec 2023 20:06:15 GMT
- Title: Testing Relative Fairness in Human Decisions With Machine Learning
- Authors: Zhe Yu, Xiaoyin Xi
- Abstract summary: This work aims to test relative fairness in human decisions.
Instead of defining what "absolutely" fair decisions are, we check the relative fairness of one decision set against another.
We show that a machine learning model trained on human decisions can inherit their bias/preference.
- Score: 4.8518076650315045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in decision-making has been a long-standing issue in our society.
Compared to algorithmic fairness, fairness in human decisions is even more
important, both because humans make the final decisions in many processes and
because machine learning models inherit bias from the human decisions they are
trained on. However, the standards for fairness in human decisions are highly
subjective and contextual, which makes it difficult to test "absolute"
fairness in human decisions. To bypass this issue, this work aims to test
relative fairness in human decisions. That is, instead of defining what
"absolutely" fair decisions are, we check the relative fairness of one
decision set against another. An example outcome can be: Decision Set A favors
females over males more than Decision Set B does. Such relative fairness has
the following benefits: (1) it avoids the ambiguous and contradictory
definitions of "absolute" fair decisions; (2) it reveals the relative
preference and bias between different sets of human decisions; (3) if a
reference set of decisions is provided, the relative fairness of other
decision sets against this reference set reflects whether those decision sets
are fair by the standard of that reference set. We define relative fairness
with statistical tests (null hypothesis and effect size tests) of the decision
differences across each sensitive group. Furthermore, we show that a machine
learning model trained on human decisions can inherit their bias/preference
and can therefore be used to estimate the relative fairness between two
decision sets made on different data.
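As a concrete illustration of the statistical testing described in the abstract, here is a minimal sketch in Python, not the authors' released code. It assumes two decision sets scored on the same items and a binary sensitive attribute; Welch's t-test and Cohen's d are stand-in choices, since the abstract only specifies "null hypothesis and effect size tests" without naming them.

```python
# Minimal sketch of a relative-fairness test, assuming two decision sets
# made on the SAME items plus a binary sensitive attribute per item.
# Welch's t-test and Cohen's d are placeholder choices, not the paper's.
import numpy as np
from scipy import stats

def relative_fairness(decisions_a, decisions_b, sensitive):
    """Test whether Decision Set A favors one sensitive group more than B."""
    diff = np.asarray(decisions_a, dtype=float) - np.asarray(decisions_b, dtype=float)
    sensitive = np.asarray(sensitive)
    g1, g0 = diff[sensitive == 1], diff[sensitive == 0]  # e.g. female / male
    # Null hypothesis test: do the per-item decision differences have the
    # same mean in both sensitive groups?
    _, p_value = stats.ttest_ind(g1, g0, equal_var=False)
    # Effect size (Cohen's d): magnitude of the relative preference.
    pooled_sd = np.sqrt((g1.var(ddof=1) + g0.var(ddof=1)) / 2)
    cohens_d = (g1.mean() - g0.mean()) / pooled_sd
    return p_value, cohens_d
```

A small p-value with a positive Cohen's d would then read as the example outcome above: Decision Set A favors group 1 over group 0 more than Decision Set B does.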
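The final claim, that a trained model carries a decision maker's bias onto new data, can be sketched the same way. Here logistic regression stands in for the model; the choice of learner, the feature matrices, and the helper name proxy_decisions are illustrative assumptions, not the paper's setup.

```python
# Sketch of the model-as-proxy idea: train on Decision Set A, predict on
# the items behind Decision Set B, then compare those predictions against B
# with the relative_fairness test above. LogisticRegression is an arbitrary
# stand-in for whatever model is trained on the human decisions.
from sklearn.linear_model import LogisticRegression

def proxy_decisions(features_a, decisions_a, features_b):
    model = LogisticRegression(max_iter=1000).fit(features_a, decisions_a)
    # Predicted probabilities approximate how decision maker A would have
    # decided B's items, carrying A's learned bias/preference with them.
    return model.predict_proba(features_b)[:, 1]

# Usage: p, d = relative_fairness(proxy_decisions(X_a, y_a, X_b),
#                                 decisions_b, sensitive_b)
```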
Related papers
- (Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers [0.0]
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains.
We quantify the uncertainty of the disparity to enhance discrimination assessments.
We define preferences over decision-makers and use brute-force search to choose the optimal decision-maker.
arXiv Detail & Related papers (2024-09-19T11:44:03Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
arXiv Detail & Related papers (2022-06-06T20:31:55Z)
- On Fair Selection in the Presence of Implicit and Differential Variance [22.897402186120434]
We study a model where the decision maker receives a noisy estimate of each candidate's quality, whose variance depends on the candidate's group.
We show that both baseline decision makers yield discrimination, although in opposite directions.
arXiv Detail & Related papers (2021-12-10T16:04:13Z)
- Legal perspective on possible fairness measures - A legal discussion using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)