A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization
- URL: http://arxiv.org/abs/2010.03665v2
- Date: Thu, 22 Oct 2020 16:37:39 GMT
- Title: A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization
- Authors: André F. Cruz, Pedro Saleiro, Catarina Belém, Carlos Soares, Pedro Bizarro
- Abstract summary: We present Fairband, a bandit-based fairness-aware hyperparameter optimization (HO) algorithm.
By introducing fairness notions into HO, we enable seamless and efficient integration of fairness objectives into real-world ML pipelines.
We show that Fairband can efficiently navigate the fairness-accuracy trade-off through hyperparameter optimization.
- Score: 5.337302350000984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Considerable research effort has been directed towards algorithmic
fairness, but there is still no major breakthrough. In practice, an exhaustive search over
all possible techniques and hyperparameters is needed to find optimal
fairness-accuracy trade-offs. Hence, coupled with the lack of tools for ML
practitioners, real-world adoption of bias reduction methods is still scarce.
To address this, we present Fairband, a bandit-based fairness-aware
hyperparameter optimization (HO) algorithm. Fairband is conceptually simple,
resource-efficient, easy to implement, and agnostic to the objective
metric, model type, and hyperparameter space being explored. Moreover, by
introducing fairness notions into HO, we enable seamless and efficient
integration of fairness objectives into real-world ML pipelines. We compare
Fairband with popular HO methods on four real-world decision-making datasets.
We show that Fairband can efficiently navigate the fairness-accuracy trade-off
through hyperparameter optimization. Furthermore, without extra training cost,
it consistently finds configurations attaining substantially improved fairness
at a comparatively small decrease in predictive accuracy.
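The bandit-based idea in the abstract can be sketched as a fairness-aware variant of successive halving (Fairband builds on Hyperband; the exact selection rule differs). The `alpha` trade-off weight, the `train_eval` callback, and the halving schedule below are illustrative assumptions, not the authors' precise algorithm:

```python
def fair_successive_halving(configs, train_eval, alpha=0.5,
                            min_budget=1, eta=3):
    """Illustrative fairness-aware successive halving.

    Ranks candidate hyperparameter configurations by a weighted
    combination of predictive accuracy and fairness. `alpha` is a
    hypothetical trade-off weight; `train_eval(cfg, budget)` is a
    user-supplied callback returning (accuracy, fairness) for a
    configuration trained at the given budget.
    """
    budget = min_budget
    candidates = list(configs)
    while len(candidates) > 1:
        # Evaluate every surviving configuration at the current budget.
        scored = []
        for cfg in candidates:
            accuracy, fairness = train_eval(cfg, budget)
            score = alpha * accuracy + (1 - alpha) * fairness
            scored.append((score, cfg))
        # Keep the top 1/eta fraction and raise the budget for survivors.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        keep = max(1, len(scored) // eta)
        candidates = [cfg for _, cfg in scored[:keep]]
        budget *= eta
    return candidates[0]
```

Because weaker configurations are discarded at small budgets, most of the training cost is spent on the few configurations that score well on both objectives, which is how a bandit-style scheme can add fairness awareness without extra training cost.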
Related papers
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with strong performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fairness-aware Configuration of Machine Learning Libraries [21.416261003364177]
This paper investigates how the parameter space of machine learning (ML) algorithms can aggravate or mitigate fairness bugs.
Three search-based software testing algorithms are proposed to uncover the precision-fairness frontier.
arXiv Detail & Related papers (2022-02-13T04:04:33Z)
- FARF: A Fair and Adaptive Random Forests Classifier [34.94595588778864]
We propose a flexible ensemble algorithm for fair decision-making in the more challenging context of evolving online settings.
This algorithm, called FARF (Fair and Adaptive Random Forests), is based on using online component classifiers and updating them according to the current distribution.
Experiments on real-world discriminated data streams demonstrate the utility of FARF.
arXiv Detail & Related papers (2021-08-17T02:06:54Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing [26.704236797908177]
Algorithmic decision-making systems are becoming more pervasive.
Due to the inherent trade-off between fairness measures and accuracy, it is desirable to ensure the trade-off between overall loss and other fairness criteria.
We propose Pareto Efficient Fairness (PEF), which is definition-agnostic, meaning that any well-defined notion of fairness can be reduced to the PEF notion.
arXiv Detail & Related papers (2021-04-04T15:49:35Z)
- Promoting Fairness through Hyperparameter Optimization [4.479834103607383]
This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development.
We propose and evaluate fairness-aware variants of three popular HO algorithms: Fair Random Search, Fair TPE, and Fairband.
We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature.
arXiv Detail & Related papers (2021-03-23T17:36:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.