Differential Parity: Relative Fairness Between Two Sets of Decisions
- URL: http://arxiv.org/abs/2112.11279v3
- Date: Fri, 07 Feb 2025 23:00:59 GMT
- Title: Differential Parity: Relative Fairness Between Two Sets of Decisions
- Authors: Zhe Yu, Xiaoyin Xi
- Abstract summary: We propose to test the relative fairness of one decision set against another with differential parity.
It avoids the ambiguous and contradictory definition of ``absolutely'' fair decisions.
It reveals the relative preference and bias between two decision sets.
- Score: 4.106941784309168
- License:
- Abstract: With AI systems widely applied to assist humans in decision-making processes such as talent hiring, school admission, and loan approval, there is an increasing need to ensure that the decisions made are fair. One major challenge for analyzing fairness in decisions is that the standards are highly subjective and contextual -- there is no consensus on what absolute fairness means for every scenario, not to mention that different fairness standards often conflict with each other. To bypass this issue, this work aims to test relative fairness in decisions. That is, instead of defining what ``absolutely'' fair decisions are, we propose to test the relative fairness of one decision set against another with differential parity -- the difference between two sets of decisions should be independent of a certain sensitive attribute. This proposed differential parity fairness notion has the following benefits: (1) it avoids the ambiguous and contradictory definition of ``absolutely'' fair decisions; (2) it reveals the relative preference and bias between two decision sets; (3) differential parity can serve as a new group fairness notion when a reference set of decisions (ground truths) is provided. One limitation of differential parity is that it requires the two sets of decisions under comparison to be made on the same data subjects. To overcome this limitation, we propose to utilize a machine learning model to bridge the gap between the two decision sets made on different data and estimate the differential parity.
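The core idea can be illustrated with a short sketch: compute the per-subject difference between the two decision sets and check whether that difference is independent of the sensitive attribute. The snippet below is a minimal sketch assuming numeric decision scores and a binary sensitive attribute; the function name `differential_parity_test` and the use of a Welch two-sample t-test as the independence check are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a differential-parity check (illustrative, not the
# authors' exact procedure): test whether the difference between two
# decision sets depends on a binary sensitive attribute.
import numpy as np
from scipy import stats


def differential_parity_test(decisions_a, decisions_b, sensitive, alpha=0.05):
    """Check whether decisions_a - decisions_b is independent of a 0/1 sensitive attribute."""
    decisions_a = np.asarray(decisions_a, dtype=float)
    decisions_b = np.asarray(decisions_b, dtype=float)
    sensitive = np.asarray(sensitive)

    # Per-subject difference; both decision sets must cover the same
    # data subjects (the limitation stated in the abstract).
    diff = decisions_a - decisions_b

    group0 = diff[sensitive == 0]
    group1 = diff[sensitive == 1]

    # Independence check via a Welch two-sample t-test on the differences:
    # if the mean difference depends on the sensitive attribute, decision
    # set A is relatively biased toward one group compared to set B.
    t_stat, p_value = stats.ttest_ind(group0, group1, equal_var=False)

    return {
        "mean_diff_group0": group0.mean(),
        "mean_diff_group1": group1.mean(),
        "t_statistic": t_stat,
        "p_value": p_value,
        "differential_parity_holds": p_value >= alpha,
    }


# Toy usage: decision set A shifts scores upward for group 1 relative to B.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=200)
decisions_b = rng.normal(0.5, 0.1, size=200)      # reference decisions
decisions_a = decisions_b + 0.05 * sensitive      # relative preference for group 1
print(differential_parity_test(decisions_a, decisions_b, sensitive))
```

When the two decision sets are made on different data subjects, the abstract proposes bridging them with a machine learning model; in this sketch that would amount to fitting a surrogate model on one decision set, predicting on the other set's subjects, and then running the same check on the estimated differences.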
Related papers
- (Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers [0.0]
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains.
We quantify the uncertainty of the disparity to enhance discrimination assessments.
We define preferences over decision-makers and use a brute-force search to choose the optimal decision-maker.
arXiv Detail & Related papers (2024-09-19T11:44:03Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - A Justice-Based Framework for the Analysis of Algorithmic
Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
arXiv Detail & Related papers (2022-06-06T20:31:55Z) - On Fair Selection in the Presence of Implicit and Differential Variance [22.897402186120434]
We study a model where the decision maker receives a noisy estimate of each candidate's quality, whose variance depends on the candidate's group.
We show that both baseline decision makers yield discrimination, although in opposite directions.
arXiv Detail & Related papers (2021-12-10T16:04:13Z) - Legal perspective on possible fairness measures - A legal discussion
using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z) - Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.