Towards Fair Machine Learning Software: Understanding and Addressing
Model Bias Through Counterfactual Thinking
- URL: http://arxiv.org/abs/2302.08018v1
- Date: Thu, 16 Feb 2023 01:27:26 GMT
- Title: Towards Fair Machine Learning Software: Understanding and Addressing
Model Bias Through Counterfactual Thinking
- Authors: Zichong Wang, Yang Zhou, Meikang Qiu, Israat Haque, Laura Brown, Yi
He, Jianwu Wang, David Lo and Wenbin Zhang
- Abstract summary: We present a novel counterfactual approach to tackle the root causes of bias in Machine Learning software.
Our approach combines models optimized for both performance and fairness, resulting in an optimal solution in both aspects.
- Score: 16.196269707571904
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The increasing use of Machine Learning (ML) software can lead to unfair and
unethical decisions, thus fairness bugs in software are becoming a growing
concern. Addressing these fairness bugs often involves sacrificing ML
performance, such as accuracy. To address this issue, we present a novel
approach that uses counterfactual thinking to tackle the root
causes of bias in ML software. In addition, our approach combines models
optimized for both performance and fairness, resulting in an optimal solution
in both aspects. We conducted a thorough evaluation of our approach on 10
benchmark tasks using a combination of 5 performance metrics, 3 fairness
metrics, and 15 measurement scenarios, all applied to 8 real-world datasets.
These extensive evaluations show that the proposed method significantly
improves the fairness of ML software while maintaining competitive performance,
outperforming state-of-the-art solutions in 84.6% of overall cases based on a
recent benchmarking tool.
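The abstract does not spell out the mechanics, but the counterfactual test it builds on is easy to sketch: flip a sensitive attribute while holding all other features fixed, and check whether the prediction changes. A minimal illustration follows; the dataset, model, and attribute layout are assumptions for demonstration, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: column 0 is an assumed binary sensitive attribute;
# the remaining columns are ordinary features.
X = rng.normal(size=(500, 4))
X[:, 0] = rng.integers(0, 2, size=500)
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Counterfactual check: flip the sensitive attribute, keep everything
# else fixed, and measure how often the prediction flips with it.
X_cf = X.copy()
X_cf[:, 0] = 1 - X_cf[:, 0]
flip_rate = np.mean(model.predict(X) != model.predict(X_cf))
print(f"predictions changed by the attribute flip alone: {flip_rate:.1%}")
```

A nonzero flip rate indicates the model leans on the sensitive attribute itself, which is the kind of root cause the paper's approach targets.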
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
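The summary gives only the bilevel framing; a rough PyTorch sketch of the Stackelberg idea, with a follower layer best-responding on a fairness penalty before the leader takes an accuracy step, could look as follows. The parameter split, the demographic-parity penalty, and all hyperparameters are assumptions, not the paper's formulation:

```python
import torch

# Toy setup: "leader" parameters handle accuracy, a separate
# "follower" layer handles fairness, mirroring the bilevel split.
torch.manual_seed(0)
X = torch.randn(256, 5)
s = (X[:, 0] > 0).float()                # assumed binary sensitive attribute
y = ((X[:, 1] + 0.3 * s) > 0).float()

leader = torch.nn.Linear(5, 8)
follower = torch.nn.Linear(8, 1)
opt_leader = torch.optim.Adam(leader.parameters(), lr=1e-2)
opt_follower = torch.optim.Adam(follower.parameters(), lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(200):
    # Follower best-responds on a fairness objective (here an assumed
    # demographic-parity gap) while the leader's parameters stay put.
    p = torch.sigmoid(follower(torch.relu(leader(X))).squeeze(-1))
    fair_loss = (p[s == 1].mean() - p[s == 0].mean()).abs()
    opt_follower.zero_grad()
    fair_loss.backward()
    opt_follower.step()

    # Leader then takes an accuracy step given the follower's response.
    logits = follower(torch.relu(leader(X))).squeeze(-1)
    acc_loss = bce(logits, y)
    opt_leader.zero_grad()
    acc_loss.backward()
    opt_leader.step()
```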
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification [52.095460362197336]
Large language models (LLMs) struggle with consistent and accurate reasoning.
LLMs are trained primarily on correct solutions, reducing their ability to detect and learn from errors.
We propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification.
arXiv Detail & Related papers (2024-10-05T05:21:48Z)
- MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs [55.20845457594977]
Large language models (LLMs) have shown increasing capability in problem-solving and decision-making.
We present MR-Ben, a process-based benchmark that demands meta-reasoning skill.
Our meta-reasoning paradigm is especially suited for system-2 slow thinking.
arXiv Detail & Related papers (2024-06-20T03:50:23Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion: adaptively setting the label smoothing value during training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
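The mechanism named in the summary, per-sample label smoothing scaled by uncertainty, can be sketched as below. The uncertainty proxy (normalized predictive entropy) and the scaling rule are assumptions standing in for the paper's actual estimator:

```python
import math
import torch
import torch.nn.functional as F

def ual_loss(logits, targets, base_smoothing=0.2):
    """Cross-entropy whose label smoothing is scaled per sample by an
    assumed uncertainty proxy: normalized predictive entropy."""
    num_classes = logits.size(-1)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    uncertainty = entropy / math.log(num_classes)   # normalized to [0, 1]

    # More uncertain samples get softer targets.
    eps = (base_smoothing * uncertainty).detach().unsqueeze(-1)
    one_hot = F.one_hot(targets, num_classes).float()
    soft = (1 - eps) * one_hot + eps / num_classes
    return -(soft * F.log_softmax(logits, dim=-1)).sum(-1).mean()

loss = ual_loss(torch.randn(4, 10), torch.tensor([1, 3, 0, 7]))
```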
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- Evaluating Mathematical Reasoning Beyond Accuracy [50.09931172314218]
We introduce ReasonEval, a new methodology for evaluating the quality of reasoning steps.
We show that ReasonEval achieves state-of-the-art performance on human-labeled datasets.
We observe that ReasonEval can play a significant role in data selection.
arXiv Detail & Related papers (2024-04-08T17:18:04Z)
- On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning [71.44986275228747]
In-context learning (ICL) has become an efficient approach propelled by the recent advancements in large language models (LLMs).
However, both paradigms are prone to the critical problem of overconfidence (i.e., miscalibration).
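"Miscalibration" here means confidence drifting away from accuracy; the standard way to quantify it is expected calibration error (ECE). A generic implementation of that metric, not anything specific to this paper:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Generic ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap weighted by bin occupancy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

conf = np.array([0.9, 0.8, 0.95, 0.6])
hit = np.array([1.0, 0.0, 1.0, 1.0])
print(expected_calibration_error(conf, hit))
```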
arXiv Detail & Related papers (2023-12-21T11:55:10Z)
- A First Look at Fairness of Machine Learning Based Code Reviewer Recommendation [14.50773969815661]
This paper conducts the first study investigating the fairness of ML applications in the software engineering (SE) domain.
Our empirical study demonstrates that current state-of-the-art ML-based code reviewer recommendation techniques exhibit unfairness and discriminatory behaviors.
This paper also discusses the reasons why the studied ML-based code reviewer recommendation systems are unfair and provides solutions to mitigate the unfairness.
arXiv Detail & Related papers (2023-07-21T01:57:51Z)
- Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML [18.17660645381856]
We propose a novel approach that utilizes automated machine learning (AutoML) techniques to mitigate bias.
By improving the default optimization function of AutoML and incorporating fairness objectives, we are able to mitigate bias with little to no loss of accuracy.
Our approach, Fair-AutoML, successfully repaired 60 out of 64 buggy cases, while existing bias mitigation techniques only repaired up to 44 out of 64 cases.
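The summary says the AutoML objective is extended with fairness terms but not how; one plausible shape, a search score trading accuracy against a group gap, is sketched below with a deliberately tiny "search". The score function, the demographic-parity gap, and the weight lam are all assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def search_score(model, X, y, s, lam=1.0):
    """Assumed combined objective: accuracy minus a weighted
    demographic-parity gap between the two sensitive groups."""
    pred = model.predict(X)
    gap = abs(pred[s == 1].mean() - pred[s == 0].mean())
    return (pred == y).mean() - lam * gap

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
X[:, 0] = rng.integers(0, 2, size=600)          # sensitive attribute
y = (X[:, 1] + 0.4 * X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-in for the AutoML search loop: pick the candidate whose
# fairness-aware score, not raw accuracy, is highest.
best = max(
    (LogisticRegression(C=c).fit(X_tr, y_tr) for c in [0.01, 0.1, 1.0, 10.0]),
    key=lambda m: search_score(m, X_te, y_te, X_te[:, 0]),
)
print("chosen C:", best.C)
```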
arXiv Detail & Related papers (2023-06-15T17:25:15Z)
- FITNESS: A Causal De-correlation Approach for Mitigating Bias in Machine Learning Software [6.4073906779537095]
Biased datasets can lead to unfair and potentially harmful outcomes.
In this paper, we propose a bias mitigation approach via de-correlating the causal effects between sensitive features and the label.
Our key idea is that by de-correlating such effects from a causality perspective, the model would avoid making predictions based on sensitive features.
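The causal machinery is not in the summary; a much simpler statistical stand-in for the de-correlation idea, residualizing each feature against the sensitive one before training, looks like this. The paper works with causal effects rather than plain linear correlation, so treat this strictly as a proxy:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500).astype(float)    # sensitive feature
X = rng.normal(size=(500, 3)) + 0.8 * s[:, None]  # features correlated with s
y = (X[:, 0] > 0.4).astype(int)

# "De-correlate": replace each feature with its residual after
# regressing out the sensitive feature, so the downstream model
# cannot exploit what s linearly explains.
X_resid = X - LinearRegression().fit(s[:, None], X).predict(s[:, None])
model = LogisticRegression().fit(X_resid, y)
```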
arXiv Detail & Related papers (2023-05-23T06:24:43Z)
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach to obtaining a fair predictive model is to train the model by optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
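The paper's stochastic algorithms handle the constraints properly; the toy penalized version below just conveys the setup, maximizing a pairwise logistic surrogate of AUC while penalizing the squared gap between group-wise surrogates. The data, the penalty form, and the step size are assumptions:

```python
import numpy as np

def auc_surrogate(w, Xp, Xn):
    """Pairwise logistic surrogate of 1 - AUC and its gradient."""
    diff = Xp @ w[:, None] - (Xn @ w)[None, :]   # positive-minus-negative score gaps
    loss = np.log1p(np.exp(-diff)).mean()
    c = -1.0 / (1.0 + np.exp(diff))              # d(loss)/d(diff) per pair
    grad = (c.sum(1) @ Xp - c.sum(0) @ Xn) / diff.size
    return loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
s = rng.integers(0, 2, size=400)                 # assumed binary group label
y = (X[:, 0] + 0.3 * s > 0).astype(int)
w, lam, lr = np.zeros(3), 5.0, 0.1

for _ in range(100):
    _, g = auc_surrogate(w, X[y == 1], X[y == 0])
    # Group-wise surrogates; penalizing their squared gap keeps ranking
    # quality comparable across groups (a penalty stand-in for the
    # paper's constrained formulation).
    la, ga = auc_surrogate(w, X[(y == 1) & (s == 1)], X[(y == 0) & (s == 1)])
    lb, gb = auc_surrogate(w, X[(y == 1) & (s == 0)], X[(y == 0) & (s == 0)])
    w -= lr * (g + lam * 2.0 * (la - lb) * (ga - gb))
```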
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- A Comprehensive Empirical Study of Bias Mitigation Methods for Software Fairness [27.67313504037565]
We present a large-scale, comprehensive empirical evaluation of bias mitigation methods.
Bias mitigation methods were evaluated with 12 Machine Learning (ML) performance metrics, 4 fairness metrics, and 24 types of fairness-performance trade-off assessment.
The effectiveness of the bias mitigation methods depends on tasks, models, and fairness and ML performance metrics, and there is no 'silver bullet' bias mitigation method demonstrated to be effective for all scenarios studied.
arXiv Detail & Related papers (2022-07-07T13:14:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.