Promoting Fairness through Hyperparameter Optimization
- URL: http://arxiv.org/abs/2103.12715v1
- Date: Tue, 23 Mar 2021 17:36:22 GMT
- Title: Promoting Fairness through Hyperparameter Optimization
- Authors: André F. Cruz, Pedro Saleiro, Catarina Belém, Carlos Soares, Pedro Bizarro
- Abstract summary: This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development.
We propose and evaluate fairness-aware variants of three popular HO algorithms: Fair Random Search, Fair TPE, and Fairband.
We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Considerable research effort has been directed toward algorithmic
fairness, but real-world adoption of bias reduction techniques is still scarce. Existing
methods are either metric- or model-specific, require access to sensitive
attributes at inference time, or carry high development and deployment costs.
This work explores, in the context of a real-world fraud detection application,
the unfairness that emerges from traditional ML model development, and how to
mitigate it with a simple and easily deployed intervention: fairness-aware
hyperparameter optimization (HO). We propose and evaluate fairness-aware
variants of three popular HO algorithms: Fair Random Search, Fair TPE, and
Fairband. Our method enables practitioners to adapt pre-existing business
operations to accommodate fairness objectives in a frictionless way and with
controllable fairness-accuracy trade-offs. Additionally, it can be coupled with
existing bias reduction techniques to tune their hyperparameters. We validate
our approach on a real-world bank account opening fraud use case, as well as on
three datasets from the fairness literature. Results show that, without extra
training cost, it is feasible to find models with a 111% average increase in
fairness and only a 6% decrease in predictive accuracy, compared to standard
fairness-blind HO.
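The intervention amounts to changing the model-selection criterion of an ordinary search loop. Below is a minimal sketch of the idea behind Fair Random Search, assuming a scalar trade-off weight `alpha`; the helper names (`sample_config`, `train_and_eval`) are hypothetical, and the plain weighted sum is an illustration, not necessarily the paper's exact formulation.

```python
def weighted_objective(accuracy: float, fairness: float, alpha: float) -> float:
    # alpha = 1 recovers standard fairness-blind selection.
    return alpha * accuracy + (1.0 - alpha) * fairness

def fair_random_search(sample_config, train_and_eval, n_trials=50, alpha=0.5):
    """Random search whose selection rule also rewards fairness.
    `sample_config` draws a hyperparameter dict; `train_and_eval`
    returns an (accuracy, fairness) pair for a config (user-supplied)."""
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = sample_config()
        accuracy, fairness = train_and_eval(config)
        score = weighted_objective(accuracy, fairness, alpha)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```

Because only the selection rule changes, the same scalarized objective can wrap any existing search strategy (TPE, bandits, grid search) without extra training cost.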
Related papers
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore the potential for achieving fairness without compromising utility when no prior demographic information is available for the training set.
We propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses.
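Taken at face value, the variance objective admits a compact sketch: penalize the spread of per-example losses while keeping the mean low. The plain mean-plus-variance form and the weight `lam` below are assumptions, not VFair's exact constrained formulation.

```python
import torch
import torch.nn.functional as F

def vfair_style_loss(logits: torch.Tensor, targets: torch.Tensor,
                     lam: float = 1.0) -> torch.Tensor:
    per_example = F.cross_entropy(logits, targets, reduction="none")
    # The mean preserves utility; the variance term discourages a hidden
    # subgroup of examples from bearing most of the loss.
    return per_example.mean() + lam * per_example.var()
```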
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
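One plausible reading of path regularization is sketched below: keep predictions stable along linear interpolations between samples from different sensitive groups. The linear path and squared penalty are assumptions about Fair-CDA, not its actual augmentation scheme.

```python
import torch

def path_consistency_penalty(model, x_a, x_b, n_points: int = 5):
    """x_a, x_b: same-shaped batches drawn from two sensitive groups."""
    reference = model(x_a).detach()
    penalty = x_a.new_zeros(())
    for t in torch.linspace(0.0, 1.0, n_points):
        x_t = (1 - t) * x_a + t * x_b   # point on the transition path
        penalty = penalty + (model(x_t) - reference).pow(2).mean()
    return penalty / n_points
```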
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute the normalized fairness improvement relative to the accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
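A hedged sketch of the adaptive-normalization idea: shared backbone weights, but one batch-normalization layer per sensitive group. The two-group routing and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GroupAdaptiveBN2d(nn.Module):
    """One BatchNorm2d per sensitive group; statistics and affine
    parameters are group-specific while the backbone stays shared."""
    def __init__(self, num_features: int, num_groups: int = 2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_groups)
        )

    def forward(self, x: torch.Tensor, group: int) -> torch.Tensor:
        # For brevity the whole batch shares one group index; real use
        # would route samples individually by group membership.
        return self.bns[group](x)
```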
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Fairness Reprogramming [42.65700878967251]
We propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique.
Specifically, FairReprogram considers the case where the model cannot be changed and appends a set of perturbations, called the fairness trigger, to the input.
We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models.
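A minimal sketch of the trigger idea: freeze the classifier and optimize only an input perturbation. The additive trigger form and the crude demographic-parity proxy below are illustrative assumptions, not FairReprogram's exact trigger shape or fairness loss.

```python
import torch
import torch.nn.functional as F

def train_fairness_trigger(model, loader, steps=100, lr=1e-2):
    """Freeze the model; optimize only an additive input perturbation.
    `loader` yields (x, y, group) batches containing both groups."""
    for p in model.parameters():
        p.requires_grad_(False)            # the model itself is never changed
    x0, _, _ = next(iter(loader))
    trigger = torch.zeros_like(x0[:1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        x, y, group = next(iter(loader))
        logits = model(x + trigger)        # trigger applied to the input
        task = F.cross_entropy(logits, y)
        # Crude demographic-parity proxy: match mean positive-class scores.
        p1 = logits.softmax(dim=-1)[:, 1]
        gap = (p1[group == 0].mean() - p1[group == 1].mean()).abs()
        loss = task + gap
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()
```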
arXiv Detail & Related papers (2022-09-21T09:37:00Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind.
It considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization [5.337302350000984]
We present Fairband, a bandit-based fairness-aware hyperparameter optimization (HO) algorithm.
By introducing fairness notions into HO, we enable seamless and efficient integration of fairness objectives into real-world ML pipelines.
We show that Fairband can efficiently navigate the fairness-accuracy trade-off through hyperparameter optimization.
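In the spirit of this description, here is a successive-halving-style sketch with a fairness-weighted promotion rule; the budget schedule, the halving rule, and the `alpha` weighting are assumptions, not Fairband's published algorithm.

```python
def fairband_style_halving(configs, partial_train_and_eval,
                           budgets=(1, 3, 9), alpha=0.5):
    """`partial_train_and_eval(config, budget)` trains `config` for
    `budget` units and returns an (accuracy, fairness) pair."""
    survivors = list(configs)
    for budget in budgets:
        scored = []
        for config in survivors:
            accuracy, fairness = partial_train_and_eval(config, budget)
            scored.append((alpha * accuracy + (1 - alpha) * fairness, config))
        scored.sort(key=lambda t: t[0], reverse=True)
        # Spend small budgets broadly, then keep only the top half.
        survivors = [config for _, config in scored[: max(1, len(scored) // 2)]]
    return survivors[0]
```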
arXiv Detail & Related papers (2020-10-07T21:35:16Z)
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
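A heavily simplified sketch of the invariance idea: penalize output divergence between an input and a counterpart drawn from its sensitive set. The squared penalty stands in for SenSeI's transport-based regularizer, and `make_counterpart` is an assumed helper.

```python
import torch

def invariance_penalty(model, x, make_counterpart):
    # `make_counterpart` (hypothetical) returns, e.g., the same features
    # with the sensitive attribute flipped.
    x_prime = make_counterpart(x)
    return (model(x) - model(x_prime)).pow(2).mean()
```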
arXiv Detail & Related papers (2020-06-25T04:31:57Z)