A Comprehensive Empirical Study of Bias Mitigation Methods for Software
Fairness
- URL: http://arxiv.org/abs/2207.03277v1
- Date: Thu, 7 Jul 2022 13:14:49 GMT
- Title: A Comprehensive Empirical Study of Bias Mitigation Methods for Software
Fairness
- Authors: Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman
- Abstract summary: We present a large-scale, comprehensive empirical evaluation of bias mitigation methods.
The bias mitigation methods were evaluated with 12 Machine Learning (ML) performance metrics, 4 fairness metrics, and 24 types of fairness-performance trade-off assessment.
The effectiveness of the bias mitigation methods depends on tasks, models, and fairness and ML performance metrics, and there is no 'silver bullet' bias mitigation method demonstrated to be effective for all scenarios studied.
- Score: 27.67313504037565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software bias is an increasingly important operational concern for software
engineers. We present a large-scale, comprehensive empirical evaluation of 17
representative bias mitigation methods, evaluated with 12 Machine Learning (ML)
performance metrics, 4 fairness metrics, and 24 types of fairness-performance
trade-off assessment, applied to 8 widely-adopted benchmark software
decision/prediction tasks. The empirical coverage is comprehensive, covering
the largest numbers of bias mitigation methods, evaluation metrics, and
fairness-performance trade-off measures compared to previous work on this
important operational software characteristic. We find that (1) the bias
mitigation methods significantly decrease the values reported by all ML
performance metrics (including those not considered in previous work) in a
large proportion of the scenarios studied (42%~75% according to different ML
performance metrics); (2) the bias mitigation methods achieve fairness
improvement in only approximately 50% of cases over all scenarios and metrics
(ranging between 29%~59% according to the metric used to assess bias/fairness); (3) the
bias mitigation methods have a poor fairness-performance trade-off or even lead
to decreases in both fairness and ML performance in 37% of the scenarios; (4)
the effectiveness of the bias mitigation methods depends on tasks, models, and
fairness and ML performance metrics, and there is no 'silver bullet' bias
mitigation method demonstrated to be effective for all scenarios studied. The
best bias mitigation method that we find outperforms other methods in only 29%
of the scenarios. We have made publicly available the scripts and data used in
this study in order to allow for future replication and extension of our work.
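To make the evaluation setup above concrete, here is a minimal, self-contained sketch (not the authors' actual scripts) that trains a classifier on synthetic data and reports one ML performance metric (accuracy) and one fairness metric (statistical parity difference); the data, feature layout, and choice of metrics are illustrative assumptions. A fairness-performance trade-off assessment in the spirit of the study would compare how both numbers move after a bias mitigation method is applied.

```python
# Minimal illustration (not the authors' pipeline): one ML performance metric
# and one fairness metric for a binary classifier on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy data standing in for a benchmark task such as Adult:
# two ordinary features, one binary sensitive attribute, one binary label.
n = 4000
sensitive = rng.integers(0, 2, n)                 # e.g. sex encoded as 0/1
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), sensitive])
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# ML performance metric: accuracy (one of the 12 metrics mentioned above).
acc = accuracy_score(y_te, pred)

# Fairness metric: statistical parity difference =
# |P(pred=1 | s=0) - P(pred=1 | s=1)|, lower is fairer.
spd = abs(pred[s_te == 0].mean() - pred[s_te == 1].mean())

print(f"accuracy={acc:.3f}  statistical_parity_difference={spd:.3f}")
```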
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
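As a rough illustration of what one zero-shot classification bias measurement can look like (this is not the taxonomy or protocol of the paper above), the sketch below computes per-label assignment-rate gaps between two demographic groups; the similarity scores and group labels are mock inputs, whereas a real evaluation would obtain them from CLIP or OpenCLIP encoders and annotated images.

```python
# Toy zero-shot classification skew measurement
# (mock similarity scores, not actual CLIP/OpenCLIP outputs).
import numpy as np

rng = np.random.default_rng(1)

labels = ["doctor", "nurse", "engineer"]      # candidate text prompts (assumed)
groups = rng.integers(0, 2, 200)              # demographic group per image (mock)
scores = rng.normal(size=(200, len(labels)))  # image-vs-prompt similarity (mock)

pred = scores.argmax(axis=1)                  # zero-shot prediction per image

# Skew: for each label, difference in assignment rate between the two groups.
for k, label in enumerate(labels):
    rate_g0 = (pred[groups == 0] == k).mean()
    rate_g1 = (pred[groups == 1] == k).mean()
    print(f"{label:>9}: group-0 rate={rate_g0:.2f}, group-1 rate={rate_g1:.2f}, "
          f"gap={abs(rate_g0 - rate_g1):.2f}")
```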
- Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML [18.17660645381856]
We propose a novel approach that utilizes automated machine learning (AutoML) techniques to mitigate bias.
By improving the default optimization function of AutoML and incorporating fairness objectives, we are able to mitigate bias with little to no loss of accuracy.
Our approach, Fair-AutoML, successfully repaired 60 out of 64 buggy cases, while existing bias mitigation techniques only repaired up to 44 out of 64 cases.
arXiv Detail & Related papers (2023-06-15T17:25:15Z)
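The Fair-AutoML summary above hinges on replacing the default AutoML objective with one that also rewards fairness. The following is a minimal sketch of that idea using a plain grid search rather than any AutoML library; the 0.5 penalty weight and the parity-gap penalty are assumptions, not values from the paper.

```python
# Sketch of a fairness-aware model-selection objective (illustrative only;
# Fair-AutoML's real objective and search procedure may differ).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def parity_gap(pred, sensitive):
    """Absolute difference in positive-prediction rate between the two groups."""
    return abs(pred[sensitive == 0].mean() - pred[sensitive == 1].mean())

def fairness_aware_score(model, X_val, y_val, s_val, alpha=0.5):
    """Higher is better: accuracy minus a weighted fairness penalty (alpha is assumed)."""
    pred = model.predict(X_val)
    return accuracy_score(y_val, pred) - alpha * parity_gap(pred, s_val)

rng = np.random.default_rng(2)
s = rng.integers(0, 2, 1000)
X = np.column_stack([rng.normal(size=(1000, 2)), s])
y = ((X[:, 0] + 0.8 * s + rng.normal(scale=0.3, size=1000)) > 0.5).astype(int)
X_tr, X_val = X[:700], X[700:]
y_tr, y_val = y[:700], y[700:]
s_val = s[700:]

# Toy "search space": one regularization strength, selected by the combined score.
candidates = [LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
              for c in (0.01, 0.1, 1.0, 10.0)]
best = max(candidates, key=lambda m: fairness_aware_score(m, X_val, y_val, s_val))
print("selected:", best)
```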
- Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize [57.22851616806617]
We show that our method achieves state-of-the-art results in four domains from the literature.
Our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
arXiv Detail & Related papers (2023-05-26T11:17:45Z)
- FITNESS: A Causal De-correlation Approach for Mitigating Bias in Machine Learning Software [6.4073906779537095]
Biased datasets can lead to unfair and potentially harmful outcomes.
In this paper, we propose a bias mitigation approach via de-correlating the causal effects between sensitive features and the label.
Our key idea is that by de-correlating such effects from a causality perspective, the model would avoid making predictions based on sensitive features.
arXiv Detail & Related papers (2023-05-23T06:24:43Z)
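For intuition only, the sketch below de-correlates features from a sensitive attribute by plain linear residualization; FITNESS itself derives its de-correlation from a causal analysis, which this generic illustration does not reproduce.

```python
# Generic feature de-correlation sketch (linear residualization), offered only
# to illustrate the idea of breaking the sensitive-feature pathway before training.
import numpy as np

def decorrelate(X, sensitive):
    """Return X with the linear effect of the sensitive attribute removed."""
    s = np.column_stack([np.ones(len(sensitive)), sensitive])  # intercept + s
    # Least-squares fit of each feature on the sensitive attribute,
    # then keep only the residuals.
    coef, *_ = np.linalg.lstsq(s, X, rcond=None)
    return X - s @ coef

rng = np.random.default_rng(3)
sens = rng.integers(0, 2, 500).astype(float)
X = np.column_stack([rng.normal(size=500) + 1.5 * sens, rng.normal(size=500)])

X_fair = decorrelate(X, sens)
print("correlation before:", np.corrcoef(X[:, 0], sens)[0, 1].round(3))
print("correlation after: ", np.corrcoef(X_fair[:, 0], sens)[0, 1].round(3))
```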
- Fairness-Aware Data Valuation for Supervised Learning [4.874780144224057]
We propose Fairness-Aware Data valuatiOn (FADO) to incorporate fairness concerns into a series of ML-related tasks.
We show how FADO can be applied as the basis for unfairness mitigation pre-processing techniques.
Our methods achieve promising results -- up to a 40 p.p. improvement in fairness at a less than 1 p.p. loss in performance compared to a baseline.
arXiv Detail & Related papers (2023-03-29T18:51:13Z)
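As a loose illustration of fairness-aware pre-processing driven by per-instance values (not FADO's actual valuation function), the sketch below applies a classic reweighing-style heuristic that balances the weight of each (group, label) cell before training.

```python
# Toy pre-processing by instance weighting (reweighing-style heuristic);
# FADO's data-valuation criterion is different and is not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(y, sensitive):
    """Weight each example so every (group, label) cell carries equal relative mass."""
    weights = np.empty(len(y), dtype=float)
    for g in np.unique(sensitive):
        for c in np.unique(y):
            mask = (sensitive == g) & (y == c)
            # Expected cell size under independence divided by observed cell size.
            expected = (sensitive == g).mean() * (y == c).mean() * len(y)
            weights[mask] = expected / max(mask.sum(), 1)
    return weights

rng = np.random.default_rng(4)
s = rng.integers(0, 2, 1000)
X = np.column_stack([rng.normal(size=1000), s])
y = ((X[:, 0] + 0.9 * s) > 0.5).astype(int)

w = reweigh(y, s)
model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
pred = model.predict(X)
print("positive-rate gap:", abs(pred[s == 0].mean() - pred[s == 1].mean()).round(3))
```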
- Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking [16.196269707571904]
We present a novel counterfactual approach to tackle the root causes of bias in Machine Learning software.
Our approach combines models optimized for both performance and fairness, resulting in an optimal solution in both aspects.
arXiv Detail & Related papers (2023-02-16T01:27:26Z)
- Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z)
- Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey [30.637712832450525]
We collect a total of 341 publications concerning bias mitigation for ML classifiers.
We investigate how existing bias mitigation methods are evaluated in the literature.
Based on the gathered insights, we hope to support practitioners in making informed choices when developing and evaluating new bias mitigation methods.
arXiv Detail & Related papers (2022-07-14T17:16:45Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.