Non-Comparative Fairness for Human-Auditing and Its Relation to
Traditional Fairness Notions
- URL: http://arxiv.org/abs/2107.01277v1
- Date: Tue, 29 Jun 2021 20:05:22 GMT
- Title: Non-Comparative Fairness for Human-Auditing and Its Relation to
Traditional Fairness Notions
- Authors: Mukund Telukunta, Venkata Sriram Siddhardh Nadendla
- Abstract summary: This paper proposes a new fairness notion based on the principle of non-comparative justice.
We show that any MLS can be deemed comparatively fair if it is non-comparatively fair with respect to a fair auditor.
We also show that the converse holds true in the context of individual fairness.
- Score: 1.8275108630751837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating bias in machine-learning based services (MLS) using traditional
algorithmic fairness notions, which rely on comparative principles, is practically
difficult, making it necessary to rely on human auditor feedback. However, despite
rigorous training on various comparative fairness notions, human auditors are known
to disagree on various aspects of fairness in practice, making it difficult to collect
reliable feedback. This paper offers a paradigm shift in algorithmic fairness by
proposing a new fairness notion based on the principle of non-comparative justice.
In contrast to traditional fairness notions, where the outcomes of two individuals or
groups are compared, our proposed notion compares the MLS's outcome with a desired
outcome for each input. This desired outcome naturally describes a human auditor's
expectation and can easily be used to evaluate an MLS on crowd-auditing platforms.
We show that any MLS can be deemed fair from the perspective of comparative fairness
(be it individual fairness, statistical parity, equal opportunity, or calibration)
if it is non-comparatively fair with respect to a fair auditor. We also show that the
converse holds true in the context of individual fairness. Given that such an
evaluation relies on the trustworthiness of the auditor, we also present an approach
to identify fair and reliable auditors by estimating their biases with respect to a
given set of sensitive attributes, as well as to quantify the uncertainty in those
bias estimates for a given MLS. All of the above results are validated on the COMPAS,
German Credit, and Adult Census Income datasets.
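As a rough illustration of how this non-comparative notion could be scored on a crowd-auditing platform, the sketch below checks, per audited input, whether the MLS decision matches the auditor's desired outcome, and computes a simple auditor-bias proxy across a binary sensitive attribute. The record fields, the disagreement-rate score, and the desired-positive-rate gap are assumptions made for this sketch, not the paper's exact estimators or notation.

```python
# Illustrative sketch only (assumed field names and estimators, not the
# paper's formulation). Binary decisions and a single binary sensitive
# attribute are assumed for simplicity.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class AuditRecord:
    mls_decision: int      # decision returned by the MLS (0 or 1)
    desired_decision: int  # outcome the human auditor says the input deserves
    sensitive_attr: int    # group membership (0 or 1) of the audited input


def non_comparative_unfairness(records: Sequence[AuditRecord]) -> float:
    """Fraction of inputs where the MLS deviates from the auditor's desired
    outcome; 0.0 means non-comparatively fair with respect to this auditor."""
    disagreements = sum(r.mls_decision != r.desired_decision for r in records)
    return disagreements / len(records)


def auditor_bias_estimate(records: Sequence[AuditRecord]) -> float:
    """Assumed auditor-bias proxy: gap between the auditor's desired positive
    rates for the two sensitive groups. A large gap suggests the auditor's own
    expectations are skewed and should not serve as a fair reference."""
    group0 = [r.desired_decision for r in records if r.sensitive_attr == 0]
    group1 = [r.desired_decision for r in records if r.sensitive_attr == 1]
    if not group0 or not group1:
        return float("nan")
    return abs(sum(group0) / len(group0) - sum(group1) / len(group1))


# Toy audit: the MLS deviates from the auditor on one of four inputs, and the
# auditor desires positives more often for group 0 than for group 1.
records = [
    AuditRecord(1, 1, 0), AuditRecord(0, 1, 0),
    AuditRecord(1, 1, 1), AuditRecord(0, 0, 1),
]
print(non_comparative_unfairness(records))  # 0.25
print(auditor_bias_estimate(records))       # 0.5
```

The paper additionally quantifies the uncertainty in such bias estimates for a given MLS; the gap above only conveys the general idea of screening out auditors whose own expectations are skewed.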
Related papers
- Fairness Evaluation with Item Response Theory [10.871079276188649]
This paper proposes a novel Fair-IRT framework to evaluate fairness in Machine Learning (ML) models.
Detailed explanations for item characteristic curves (ICCs) are provided for particular individuals.
Experiments demonstrate the effectiveness of this framework as a fairness evaluation tool.
arXiv Detail & Related papers (2024-10-20T22:25:20Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines in debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- A Normative Framework for Benchmarking Consumer Fairness in Large Language Model Recommender System [9.470545149911072]
This paper proposes a normative framework to benchmark consumer fairness in LLM-powered recommender systems.
We argue that the absence of such a framework can lead to arbitrary conclusions about fairness.
Experiments on consumer fairness with the MovieLens dataset reveal fairness deviations in age-based recommendations.
arXiv Detail & Related papers (2024-05-03T16:25:27Z)
- Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
arXiv Detail & Related papers (2023-09-04T13:49:48Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning [85.1541170468617]
This paper reconsiders the nature of commonsense reasoning and proposes a novel commonsense reasoning metric, Non-Replacement Confidence (NRC).
The proposed method boosts zero-shot performance on two commonsense reasoning benchmarks and a further seven commonsense question-answering datasets.
arXiv Detail & Related papers (2022-08-23T14:42:14Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- On Learning and Enforcing Latent Assessment Models using Binary Feedback from Human Auditors Regarding Black-Box Classifiers [1.116812194101501]
We propose a novel model called latent assessment model (LAM) to characterize binary feedback provided by human auditors.
We prove that individual and group fairness notions are guaranteed as long as the auditor's intrinsic judgments inherently satisfy the fairness notion.
We also demonstrate this relationship between LAM and traditional fairness notions on three well-known datasets.
arXiv Detail & Related papers (2022-02-16T18:54:32Z)
- Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
- On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion [1.116812194101501]
Decision-support systems have been found to be discriminatory in the context of many practical deployments.
We propose a new fairness notion based on the principle of non-comparative justice.
We show that the proposed fairness notion also provides guarantees in terms of comparative fairness notions.
arXiv Detail & Related papers (2020-09-09T16:04:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.