Equal Confusion Fairness: Measuring Group-Based Disparities in Automated
Decision Systems
- URL: http://arxiv.org/abs/2307.00472v1
- Date: Sun, 2 Jul 2023 04:44:19 GMT
- Title: Equal Confusion Fairness: Measuring Group-Based Disparities in Automated
Decision Systems
- Authors: Furkan Gursoy and Ioannis A. Kakadiaris
- Abstract summary: This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment.
- Score: 5.076419064097733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As artificial intelligence plays an increasingly substantial role in
decisions affecting humans and society, the accountability of automated
decision systems has been receiving increasing attention from researchers and
practitioners. Fairness, which is concerned with eliminating unjust treatment
and discrimination against individuals or sensitive groups, is a critical
aspect of accountability. Yet, for evaluating fairness, there is a plethora of
fairness metrics in the literature that employ different perspectives and
assumptions that are often incompatible. This work focuses on group fairness.
Most group fairness metrics require parity between selected statistics
computed from the confusion matrices of different sensitive groups.
Generalizing this intuition, this paper proposes a new equal confusion fairness
test to check an automated decision system for fairness and a new confusion
parity error to quantify the extent of any unfairness. To further analyze the
source of potential unfairness, an appropriate post hoc analysis methodology is
also presented. The usefulness of the test, metric, and post hoc analysis is
demonstrated via a case study on the controversial case of COMPAS, an automated
decision system employed in the US to assist judges with assessing recidivism
risks. Overall, the methods and metrics provided here may assess automated
decision systems' fairness as part of more extensive accountability
assessments, such as those based on the system accountability benchmark.
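The core idea can be illustrated in a few lines. The sketch below is a hedged, assumed instantiation rather than the authors' implementation: it uses a chi-square test of independence between sensitive group and confusion-matrix cell as the equal confusion test, and an average absolute deviation between each group's normalized confusion matrix and the overall one as the confusion parity error; the paper's exact statistic and metric definitions may differ.

```python
# Minimal, illustrative sketch of the equal confusion idea (assumed
# instantiation; the paper's exact definitions may differ).
# Labels are assumed to be integer-coded NumPy arrays.
import numpy as np
from scipy.stats import chi2_contingency

def confusion_cells(y_true, y_pred, n_classes):
    """Flattened confusion-matrix cell counts for one set of samples."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm.ravel()

def equal_confusion_test(y_true, y_pred, groups, n_classes=2):
    """Chi-square test of independence between sensitive group and
    confusion-matrix cell; a small p-value indicates that the groups'
    confusion matrices differ beyond what chance would explain."""
    table = np.array([
        confusion_cells(y_true[groups == g], y_pred[groups == g], n_classes)
        for g in np.unique(groups)
    ])
    table = table[:, table.sum(axis=0) > 0]  # drop cells empty in all groups
    stat, p_value, _, _ = chi2_contingency(table)
    return stat, p_value

def confusion_parity_error(y_true, y_pred, groups, n_classes=2):
    """Average absolute deviation between each group's normalized
    confusion matrix and the overall normalized confusion matrix."""
    overall = confusion_cells(y_true, y_pred, n_classes)
    overall = overall / overall.sum()
    deviations = []
    for g in np.unique(groups):
        cm = confusion_cells(y_true[groups == g], y_pred[groups == g], n_classes)
        deviations.append(np.abs(cm / cm.sum() - overall).sum())
    return float(np.mean(deviations))
```

For a COMPAS-style case study, groups would hold the race attribute, y_true the observed recidivism outcomes, and y_pred the binarized risk predictions.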
Related papers
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In settings such as clinical diagnosis, fully autonomous machine behavior often exceeds what is ethically permissible.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
When using machine learning to aid decision-making, it is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
In cases where systematic differences between groups play a significant role in outcomes, these methods may overlook the influence of non-protected variables.
arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- Perspectives on Large Language Models for Relevance Judgment [56.935731584323996]
It has been claimed that large language models (LLMs) can assist with relevance judgments.
However, it is not clear whether such automated judgments can reliably be used in evaluations of retrieval systems.
arXiv Detail & Related papers (2023-04-13T13:08:38Z)
- Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, even nominally objective machine learning (ML) algorithms are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z)
- Error Parity Fairness: Testing for Group Fairness in Regression Tasks [5.076419064097733]
This work presents error parity as a regression fairness notion and introduces a testing methodology to assess group fairness.
The methodology is complemented by a permutation test that compares groups on several statistics to locate disparities and identify the impacted groups (a sketch of this idea appears after this list).
Overall, the proposed regression fairness testing methodology fills a gap in the fair machine learning literature and may serve as a part of larger accountability assessments and algorithm audits.
arXiv Detail & Related papers (2022-08-16T17:47:20Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Fairness Score and Process Standardization: Framework for Fairness Certification in Artificial Intelligence Systems [0.4297070083645048]
We propose a novel Fairness Score to measure the fairness of a data-driven AI system.
The paper also provides a framework to operationalise the concept of fairness and facilitate the commercial deployment of such systems.
arXiv Detail & Related papers (2022-01-10T15:45:12Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches, which estimate group prevalences rather than classifying individuals, are particularly well suited to this problem (see the sketch after this list).
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion [1.116812194101501]
Decision-support systems have been found to be discriminatory in the context of many practical deployments.
We propose a new fairness notion based on the principle of non-comparative justice.
We show that the proposed fairness notion also provides guarantees in terms of comparative fairness notions.
arXiv Detail & Related papers (2020-09-09T16:04:41Z)
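The permutation-test idea from the "Error Parity Fairness" entry above can be sketched as follows. The test statistic (maximum gap in group mean squared errors) and the function names are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative permutation test for error parity in regression
# (a sketch of the idea, not the authors' code).
import numpy as np

def error_parity_permutation_test(y_true, y_pred, groups,
                                  n_permutations=10_000, seed=0):
    """Tests whether squared prediction errors are equally distributed
    across groups by permuting group labels to build a null distribution."""
    rng = np.random.default_rng(seed)
    errors = (y_true - y_pred) ** 2

    def max_gap(labels):
        means = [errors[labels == g].mean() for g in np.unique(labels)]
        return max(means) - min(means)

    observed = max_gap(groups)
    null = np.array([max_gap(rng.permutation(groups))
                     for _ in range(n_permutations)])
    # One-sided p-value with add-one smoothing.
    p_value = (1 + (null >= observed).sum()) / (1 + n_permutations)
    return observed, float(p_value)
```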
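Similarly, a minimal sketch of the quantification idea from the "Measuring Fairness Under Unawareness" entry: the adjusted classify-and-count correction and the Bayes step below are standard techniques assumed here for illustration; the paper's actual estimators may differ.

```python
# Sketch: estimating demographic disparity without observed sensitive
# attributes, via quantification (adjusted classify-and-count).
# All names are illustrative; assumes 0 < P(s=1) < 1 and tpr > fpr.
import numpy as np

def acc_prevalence(attr_pred, tpr, fpr):
    """Adjusted classify-and-count: corrects the raw predicted prevalence
    of the sensitive group using the attribute classifier's TPR/FPR,
    estimated on auxiliary labeled data."""
    raw = float(np.mean(attr_pred))
    return float(np.clip((raw - fpr) / (tpr - fpr), 0.0, 1.0))

def demographic_disparity(decisions, attr_pred, tpr, fpr):
    """Estimates P(accept | s=1) - P(accept | s=0) via Bayes' rule,
    using quantified prevalences instead of individual attributes.
    decisions and attr_pred are parallel binary NumPy arrays."""
    p_acc = float(np.mean(decisions))                # acceptance rate
    p_s = acc_prevalence(attr_pred, tpr, fpr)        # P(s=1)
    p_s_acc = acc_prevalence(attr_pred[decisions == 1], tpr, fpr)  # P(s=1|accept)
    accept_s1 = p_s_acc * p_acc / p_s                # P(accept | s=1)
    accept_s0 = (1 - p_s_acc) * p_acc / (1 - p_s)    # P(accept | s=0)
    return accept_s1 - accept_s0
```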
This list is automatically generated from the titles and abstracts of the papers on this site.