Error Parity Fairness: Testing for Group Fairness in Regression Tasks
- URL: http://arxiv.org/abs/2208.08279v1
- Date: Tue, 16 Aug 2022 17:47:20 GMT
- Title: Error Parity Fairness: Testing for Group Fairness in Regression Tasks
- Authors: Furkan Gursoy, Ioannis A. Kakadiaris
- Abstract summary: This work presents error parity as a regression fairness notion and introduces a testing methodology to assess group fairness.
The error parity test is followed by a suitable permutation test that compares groups on several statistics to explore disparities and identify impacted groups.
Overall, the proposed regression fairness testing methodology fills a gap in the fair machine learning literature and may serve as a part of larger accountability assessments and algorithm audits.
- Score: 5.076419064097733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The applications of Artificial Intelligence (AI) surround decisions on
increasingly many aspects of human lives. Society responds by imposing legal
and social expectations for the accountability of such automated decision
systems (ADSs). Fairness, a fundamental constituent of AI accountability, is
concerned with just treatment of individuals and sensitive groups (e.g., based
on sex, race). While many studies focus on fair learning and fairness testing
for classification tasks, the literature on how to examine fairness in
regression tasks is rather limited. This work presents error parity as a
regression fairness notion and introduces a testing methodology to assess group
fairness based on a statistical hypothesis testing procedure. The error parity
test checks whether prediction errors are distributed similarly across
sensitive groups to determine if an ADS is fair. It is followed by a suitable
permutation test to compare groups on several statistics to explore disparities
and identify impacted groups. The usefulness and applicability of the proposed
methodology are demonstrated via a case study on COVID-19 projections in the US
at the county level, which revealed race-based differences in forecast errors.
Overall, the proposed regression fairness testing methodology fills a gap in
the fair machine learning literature and may serve as a part of larger
accountability assessments and algorithm audits.
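The two-step procedure described in the abstract (check whether prediction errors are distributed similarly across sensitive groups, then compare groups on several error statistics via a permutation test) can be illustrated with a short sketch. The code below is a minimal, generic permutation test on group-wise error statistics, assuming precomputed prediction errors and group labels; the function names (`permutation_test`, `mean_error_gap`, `mae_gap`) and the choice of statistics are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a permutation test on group-wise error statistics
# (illustrative only; the paper's exact statistics and procedure may differ).
import numpy as np


def permutation_test(errors, groups, statistic, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a gap statistic between two groups.

    errors    : 1-D array of prediction errors (e.g., y_pred - y_true)
    groups    : 1-D array of sensitive-group labels (two groups assumed here)
    statistic : callable (errors_a, errors_b) -> float
    Returns an approximate p-value under the null hypothesis that errors
    are exchangeable across the two groups.
    """
    rng = np.random.default_rng(seed)
    errors, groups = np.asarray(errors, dtype=float), np.asarray(groups)
    labels = np.unique(groups)
    if len(labels) != 2:
        raise ValueError("this sketch handles exactly two groups")
    a = errors[groups == labels[0]]
    b = errors[groups == labels[1]]
    observed = statistic(a, b)

    pooled = np.concatenate([a, b])
    n_a = len(a)
    extreme = 0
    for _ in range(n_permutations):
        perm = rng.permutation(pooled)            # shuffle group membership
        stat = statistic(perm[:n_a], perm[n_a:])
        extreme += abs(stat) >= abs(observed)
    return (extreme + 1) / (n_permutations + 1)   # add-one correction


# Example gap statistics one might compare across groups.
mean_error_gap = lambda a, b: a.mean() - b.mean()            # signed bias gap
mae_gap = lambda a, b: np.abs(a).mean() - np.abs(b).mean()   # accuracy gap
```

For a case study like the county-level COVID-19 projections mentioned above, one plausible usage is to compute per-county forecast errors, label counties by the racial grouping of interest, and run the test once per statistic (e.g., `permutation_test(errors, race_groups, mae_gap)`); small p-values indicate group-based disparities in forecast errors.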
Related papers
- Fairness Evaluation with Item Response Theory [10.871079276188649]
This paper proposes a novel Fair-IRT framework to evaluate fairness in Machine Learning (ML) models.
Detailed explanations for item characteristic curves (ICCs) are provided for particular individuals.
Experiments demonstrate the effectiveness of this framework as a fairness evaluation tool.
arXiv Detail & Related papers (2024-10-20T22:25:20Z)
- Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment.
arXiv Detail & Related papers (2023-07-02T04:44:19Z)
- The Flawed Foundations of Fair Machine Learning [0.0]
We show that there is a trade-off between statistically accurate outcomes and group similar outcomes in any data setting where group disparities exist.
We introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes.
arXiv Detail & Related papers (2023-06-02T10:07:12Z)
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
When using machine learning to aid decision-making, it is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
In cases where systematic differences between groups play a significant role in outcomes, these methods may overlook the influence of non-protected variables.
arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- Auditing ICU Readmission Rates in an Clinical Database: An Analysis of Risk Factors and Clinical Outcomes [0.0]
This study presents a machine learning pipeline for clinical data classification in the context of a 30-day readmission problem.
The fairness audit uncovers disparities in equal opportunity, predictive parity, false positive rate parity, and false negative rate parity criteria.
The study suggests the need for collaborative efforts among researchers, policymakers, and practitioners to address bias and fairness in artificial intelligence (AI) systems.
arXiv Detail & Related papers (2023-04-12T17:09:38Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
arXiv Detail & Related papers (2022-02-02T12:26:25Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)