Cascaded Debiasing: Studying the Cumulative Effect of Multiple
Fairness-Enhancing Interventions
- URL: http://arxiv.org/abs/2202.03734v2
- Date: Mon, 22 Aug 2022 19:12:04 GMT
- Title: Cascaded Debiasing: Studying the Cumulative Effect of Multiple
Fairness-Enhancing Interventions
- Authors: Bhavya Ghai, Mihir Mishra, Klaus Mueller
- Abstract summary: This paper investigates the cumulative effect of multiple fairness-enhancing interventions at different stages of the machine learning (ML) pipeline.
Applying multiple interventions results in better fairness and lower utility than individual interventions on aggregate.
On the downside, fairness-enhancing interventions can negatively impact different population groups, especially the privileged group.
- Score: 48.98659895355356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the cumulative effect of multiple fairness-enhancing
interventions at different stages of the machine learning (ML) pipeline is a
critical and underexplored facet of the fairness literature. Such knowledge can
be valuable to data scientists/ML practitioners in designing fair ML pipelines.
This paper takes the first step in exploring this area by undertaking an
extensive empirical study comprising 60 combinations of interventions, 9
fairness metrics, 2 utility metrics (Accuracy and F1 Score) across 4 benchmark
datasets. We quantitatively analyze the experimental data to measure the impact
of multiple interventions on fairness, utility and population groups. We found
that applying multiple interventions results in better fairness and lower
utility than individual interventions on aggregate. However, adding more
interventions does not always result in better fairness or worse utility. The
likelihood of achieving high performance (F1 Score) along with high fairness
increases with a larger number of interventions. On the downside, we found that
fairness-enhancing interventions can negatively impact different population
groups, especially the privileged group. This study highlights the need for new
fairness metrics that account for the impact on different population groups
apart from just the disparity between groups. Lastly, we offer a list of
combinations of interventions that perform best for different fairness and
utility metrics to aid the design of fair ML pipelines.
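The paper itself does not ship code, but the kind of pipeline it studies can be sketched briefly. The snippet below is a hypothetical illustration, not the authors' implementation: it cascades a pre-processing intervention (per-group instance reweighing, in the spirit of Kamiran and Calders) with a post-processing intervention (group-specific decision thresholds) around a scikit-learn classifier on synthetic data, then reports F1 Score and statistical parity difference. The synthetic dataset, the threshold values, and all variable names are assumptions made for illustration only.

```python
# Hypothetical sketch of a two-stage "cascaded" debiasing pipeline:
# pre-processing (reweighing) followed by post-processing (per-group thresholds).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: X features, s = protected attribute (1 = privileged), y = label.
n = 5000
s = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + s[:, None] * 0.3            # mild group shift
y = (X[:, 0] + 0.5 * s + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, s, test_size=0.3, random_state=0)

# --- Stage 1: pre-processing (instance reweighing) ---
# Weight each (group, label) cell so group membership and label look independent.
w = np.ones_like(y_tr, dtype=float)
for g in (0, 1):
    for c in (0, 1):
        mask = (s_tr == g) & (y_tr == c)
        expected = (s_tr == g).mean() * (y_tr == c).mean()
        observed = mask.mean()
        if observed > 0:
            w[mask] = expected / observed

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=w)
scores = clf.predict_proba(X_te)[:, 1]

# --- Stage 2: post-processing (group-specific thresholds, illustrative values) ---
thresholds = {0: 0.45, 1: 0.55}   # assumed; a real pipeline would tune these
y_hat = np.array([scores[i] >= thresholds[int(s_te[i])] for i in range(len(scores))],
                 dtype=int)

# --- Utility and one group-fairness metric ---
f1 = f1_score(y_te, y_hat)
spd = y_hat[s_te == 0].mean() - y_hat[s_te == 1].mean()   # statistical parity difference
print(f"F1 = {f1:.3f}, statistical parity difference = {spd:+.3f}")
```

In the study itself, the interventions are applied at different pipeline stages, and 60 such combinations are evaluated against 9 fairness metrics and 2 utility metrics across 4 benchmark datasets; the sketch above only illustrates the cascading idea on one pre-processing and one post-processing step.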
Related papers
- Fairpriori: Improving Biased Subgroup Discovery for Deep Neural Network Fairness [21.439820064223877]
This paper introduces Fairpriori, a novel biased subgroup discovery method.
It incorporates the frequent itemset generation algorithm to facilitate effective and efficient investigation of intersectional bias.
Fairpriori demonstrates superior effectiveness and efficiency when identifying intersectional bias.
arXiv Detail & Related papers (2024-06-25T00:15:13Z)
- Fairness-enhancing mixed effects deep learning improves fairness on in- and out-of-distribution clustered (non-iid) data [6.596656267996196]
We introduce the Fair Mixed Effects Deep Learning (Fair MEDL) framework.
Fair MEDL quantifies cluster-invariant fixed effects (FE) and cluster-specific random effects (RE).
We incorporate adversarial debiasing to promote fairness across three key metrics: Equalized Odds, Demographic Parity, and Counterfactual Fairness (a minimal sketch of computing such group metrics appears after this list).
arXiv Detail & Related papers (2023-10-04T20:18:45Z)
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
It is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
Confounding factors, which are non-protected variables that manifest systematic differences, can significantly affect fairness evaluation.
arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- Fair Effect Attribution in Parallel Online Experiments [57.13281584606437]
A/B tests serve the purpose of reliably identifying the effect of changes introduced in online services.
It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly.
Despite perfect randomization between groups, simultaneous experiments can interact with each other and create a negative impact on average population outcomes.
arXiv Detail & Related papers (2022-10-15T17:15:51Z)
- Fairness-aware Model-agnostic Positive and Unlabeled Learning [38.50536380390474]
We propose a fairness-aware Positive and Unlabeled Learning (PUL) method named FairPUL.
For binary classification over individuals from two populations, we aim to achieve similar true positive rates and false positive rates.
Our framework is proven to be statistically consistent in terms of both the classification error and the fairness metric.
arXiv Detail & Related papers (2022-06-19T08:04:23Z)
- Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
arXiv Detail & Related papers (2022-02-02T12:26:25Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
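Several of the entries above, like the main study, assess group fairness with metrics such as Demographic Parity (statistical parity), Equalized Odds, and equal true/false positive rates across groups. The following is a minimal, hypothetical sketch of computing these gaps from binary predictions and a binary protected attribute; the function name, the coding of the privileged group as s = 1, and the toy arrays are assumptions for illustration, and exact metric definitions vary across the papers above.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, s):
    """Compute simple group-fairness gaps between the unprivileged (s = 0)
    and privileged (s = 1) groups from binary labels and predictions."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))

    def rates(group):
        yt, yp = y_true[s == group], y_pred[s == group]
        sel = yp.mean()                                   # selection rate
        tpr = yp[yt == 1].mean() if (yt == 1).any() else np.nan
        fpr = yp[yt == 0].mean() if (yt == 0).any() else np.nan
        return sel, tpr, fpr

    sel0, tpr0, fpr0 = rates(0)
    sel1, tpr1, fpr1 = rates(1)
    return {
        # Demographic parity: difference in selection rates.
        "statistical_parity_difference": sel0 - sel1,
        # Equal opportunity: difference in true positive rates.
        "equal_opportunity_difference": tpr0 - tpr1,
        # Average odds: mean of the TPR and FPR gaps, a scalar summary
        # often used alongside the Equalized Odds criterion.
        "average_odds_difference": 0.5 * ((tpr0 - tpr1) + (fpr0 - fpr1)),
    }

# Toy usage with made-up arrays:
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_fairness_report(y_true, y_pred, s))
```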