Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML
- URL: http://arxiv.org/abs/2306.09297v3
- Date: Tue, 29 Aug 2023 00:49:40 GMT
- Title: Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML
- Authors: Giang Nguyen, Sumon Biswas, Hridesh Rajan
- Abstract summary: We propose a novel approach that utilizes automated machine learning (AutoML) techniques to mitigate bias.
By improving the default optimization function of AutoML and incorporating fairness objectives, we are able to mitigate bias with little to no loss of accuracy.
Our approach, Fair-AutoML, successfully repaired 60 out of 64 buggy cases, while existing bias mitigation techniques only repaired up to 44 out of 64 cases.
- Score: 18.17660645381856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) is increasingly being used in critical decision-making
software, but incidents have raised questions about the fairness of ML
predictions. To address this issue, new tools and methods are needed to
mitigate bias in ML-based software. Previous studies have proposed bias
mitigation algorithms that only work in specific situations and often result in
a loss of accuracy. Our proposed solution is a novel approach that utilizes
automated machine learning (AutoML) techniques to mitigate bias. Our approach
includes two key innovations: a novel optimization function and a
fairness-aware search space. By improving the default optimization function of
AutoML and incorporating fairness objectives, we are able to mitigate bias with
little to no loss of accuracy. Additionally, we propose a fairness-aware search
space pruning method for AutoML to reduce computational cost and repair time.
Our approach, built on the state-of-the-art Auto-Sklearn tool, is designed to
reduce bias in real-world scenarios. To demonstrate its effectiveness, we
evaluated our approach on four fairness problems and 16 different ML models;
the results show a significant improvement over the baseline and existing
bias mitigation techniques. Our approach, Fair-AutoML,
successfully repaired 60 out of 64 buggy cases, while existing bias mitigation
techniques only repaired up to 44 out of 64 cases.
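The abstract does not spell out the optimization function or the pruning mechanism, but the two ideas can be illustrated concretely. The sketch below is a minimal illustration, not Fair-AutoML's actual implementation: it scores candidate models with accuracy minus a weighted statistical-parity penalty (a stand-in for the paper's fairness objectives), and it narrows Auto-Sklearn's search space through the `include` argument of recent auto-sklearn releases to mimic fairness-aware pruning. The penalty weight `lam`, the parity metric, and the component lists are all assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from autosklearn.classification import AutoSklearnClassifier

def statistical_parity_difference(y_pred, sensitive):
    """|P(y_hat=1 | A=0) - P(y_hat=1 | A=1)| for a binary sensitive attribute."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def combined_score(y_true, y_pred, sensitive, lam=1.0):
    """Higher is better: accuracy minus a weighted fairness violation.
    `lam` is an illustrative knob, not the paper's weighting scheme."""
    return accuracy_score(y_true, y_pred) - lam * statistical_parity_difference(y_pred, sensitive)

# Search-space "pruning" by restriction: `include` limits the components
# Auto-Sklearn may try, which cuts search cost. The lists are illustrative.
automl = AutoSklearnClassifier(
    time_left_for_this_task=600,
    include={
        "classifier": ["random_forest", "gradient_boosting"],
        "feature_preprocessor": ["no_preprocessing"],
    },
)
# After automl.fit(X_train, y_train), candidates can be re-ranked on a
# held-out split with combined_score(y_val, automl.predict(X_val), a_val).
```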
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
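The summary names the mechanism but not the update rule. The toy sketch below is one plausible reading of a "normalized gradient difference" step over a retain objective and a forget objective, not the paper's verified formulation; the learning rate here is fixed rather than adaptive.

```python
import torch

def ngdiff_step(params, retain_loss, forget_loss, lr=1e-3, eps=1e-12):
    """Toy step: descend the retain loss and ascend the forget loss, with
    each gradient scaled to unit norm so neither objective dominates.
    A plausible reading of the summary, not the paper's exact algorithm."""
    g_retain = torch.autograd.grad(retain_loss, params, retain_graph=True)
    g_forget = torch.autograd.grad(forget_loss, params)
    norm_r = torch.sqrt(sum((g ** 2).sum() for g in g_retain)) + eps
    norm_f = torch.sqrt(sum((g ** 2).sum() for g in g_forget)) + eps
    with torch.no_grad():
        for p, g_r, g_f in zip(params, g_retain, g_forget):
            p -= lr * (g_r / norm_r - g_f / norm_f)
```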
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation [93.01964988474755]
AutoMQM is a prompting technique which asks large language models to identify and categorize errors in translations.
We study the impact of labeled data through in-context learning and finetuning.
We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores.
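The actual AutoMQM prompt, output schema, and scoring live in the paper; the sketch below only illustrates the general shape of MQM-style error annotation with an LLM. The `llm` callable is a hypothetical stand-in, and the severity weights follow the common MQM convention (1 for minor, 5 for major) as an assumption.

```python
MQM_WEIGHTS = {"minor": 1, "major": 5}  # common MQM convention; an assumption here

PROMPT = (
    "Identify the translation errors in the hypothesis. Output one line per\n"
    "error, formatted as: <error span> | <category> | <minor or major>\n"
    "Source: {src}\nHypothesis: {hyp}\nErrors:"
)

def automqm_style_score(src, hyp, llm):
    """Ask an LLM (`llm(prompt) -> str` is a hypothetical stand-in) for
    MQM-style error annotations, then sum them into a weighted penalty."""
    reply = llm(PROMPT.format(src=src, hyp=hyp))
    penalty = 0
    for line in reply.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and parts[2].lower() in MQM_WEIGHTS:
            penalty += MQM_WEIGHTS[parts[2].lower()]
    return -float(penalty)  # higher (closer to zero) means fewer/milder errors
```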
arXiv Detail & Related papers (2023-08-14T17:17:21Z)
- Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize [57.22851616806617]
We show that our method achieves state-of-the-art results in four domains from the literature.
Our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
arXiv Detail & Related papers (2023-05-26T11:17:45Z)
- Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML [52.86328317233883]
We present a comprehensive overview of different ways in which fairness-related harm can arise.
We highlight several open technical challenges for future work in this direction.
arXiv Detail & Related papers (2023-03-15T09:40:08Z)
- Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking [16.196269707571904]
We present a novel counterfactual approach to tackle the root causes of bias in Machine Learning software.
Our approach combines models optimized for both performance and fairness, resulting in an optimal solution in both aspects.
arXiv Detail & Related papers (2023-02-16T01:27:26Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes it more stable and accurate.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- A Comprehensive Empirical Study of Bias Mitigation Methods for Software Fairness [27.67313504037565]
We present a large-scale, comprehensive empirical evaluation of bias mitigation methods.
Bias mitigation methods were evaluated with 12 Machine Learning (ML) performance metrics, 4 fairness metrics, and 24 types of fairness-performance trade-off assessment.
The effectiveness of the bias mitigation methods depends on tasks, models, and fairness and ML performance metrics, and there is no 'silver bullet' bias mitigation method demonstrated to be effective for all scenarios studied.
arXiv Detail & Related papers (2022-07-07T13:14:49Z)
- Learning the Quality of Machine Permutations in Job Shop Scheduling [9.972171952370287]
We propose a novel supervised learning task that aims at predicting the quality of machine permutations.
Then, we design an original methodology to estimate this quality that allows us to create an accurate sequential deep learning model.
arXiv Detail & Related papers (2022-07-07T11:53:10Z)
- Individually Fair Gradient Boosting [86.1984206610373]
We consider the task of enforcing individual fairness in gradient boosting.
We show that our algorithm converges globally and generalizes.
We also demonstrate the efficacy of our algorithm on three ML problems susceptible to algorithmic bias.
arXiv Detail & Related papers (2021-03-31T03:06:57Z)
- Interpret-able feedback for AutoML systems [5.5524559605452595]
Automated machine learning (AutoML) systems aim to enable training machine learning (ML) models for non-ML experts.
A shortcoming of these systems is that when they fail to produce a model with high accuracy, the user has no path to improve the model.
We introduce an interpretable data feedback solution for AutoML.
arXiv Detail & Related papers (2021-02-22T18:54:26Z)
- Robusta: Robust AutoML for Feature Selection via Reinforcement Learning [24.24652530951966]
We propose Robusta, the first robust AutoML framework, based on reinforcement learning (RL).
We show that the framework is able to improve the model robustness by up to 22% while maintaining competitive accuracy on benign samples.
arXiv Detail & Related papers (2021-01-15T03:12:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.