FARF: A Fair and Adaptive Random Forests Classifier
- URL: http://arxiv.org/abs/2108.07403v1
- Date: Tue, 17 Aug 2021 02:06:54 GMT
- Title: FARF: A Fair and Adaptive Random Forests Classifier
- Authors: Wenbin Zhang, Albert Bifet, Xiangliang Zhang, Jeremy C. Weiss and
Wolfgang Nejdl
- Abstract summary: We propose a flexible ensemble algorithm for fair decision-making in the more challenging context of evolving online settings.
This algorithm, called FARF (Fair and Adaptive Random Forests), is based on using online component classifiers and updating them according to the current distribution.
Experiments on real-world discriminated data streams demonstrate the utility of FARF.
- Score: 34.94595588778864
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: As Artificial Intelligence (AI) is used in more applications, the need to
consider and mitigate biases from the learned models has followed. Most works
in developing fair learning algorithms focus on the offline setting. However,
in many real-world applications data comes in an online fashion and needs to be
processed on the fly. Moreover, in practical applications there is a trade-off
between accuracy and fairness that needs to be accounted for, yet current
methods often expose multiple hyperparameters whose non-trivial interactions
must be tuned to achieve fairness. In this paper, we propose a flexible ensemble algorithm for
fair decision-making in the more challenging context of evolving online
settings. This algorithm, called FARF (Fair and Adaptive Random Forests), is
based on online component classifiers that are updated according to the
current distribution in a way that also accounts for fairness, with a single
hyperparameter that alters the fairness-accuracy balance. Experiments on
real-world discriminated data streams demonstrate the utility of FARF.
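No reference implementation accompanies the abstract here, so the following is a minimal, hypothetical sketch of what a FARF-style learner could look like: Oza-style online bagging over incremental base models, with the single knob `lam` blending each member's running accuracy and running statistical-parity gap into its voting weight. The class name, the `SGDClassifier` base learners (FARF's components are online trees), and the exact weighting rule are all illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class FairOnlineEnsemble:
    """Sketch of a FARF-like ensemble: online bagging plus ONE trade-off
    knob `lam` in [0, 1] that blends accuracy and fairness when weighting
    each member's vote."""

    def __init__(self, n_members=10, lam=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.lam = lam  # 0 -> weight members by accuracy only, 1 -> by fairness only
        self.members = [SGDClassifier(loss="log_loss", random_state=i)
                        for i in range(n_members)]
        self.acc = np.full(n_members, 0.5)   # running accuracy per member
        self.pos = np.zeros((n_members, 2))  # positive predictions per group
        self.cnt = np.zeros(2)               # stream instances per group
        self.seen = 0

    def _weights(self):
        rate = self.pos / np.maximum(self.cnt, 1.0)  # P(yhat=1 | s) per member
        disc = np.abs(rate[:, 1] - rate[:, 0])       # statistical-parity gap
        return (1 - self.lam) * self.acc + self.lam * (1 - disc)

    def predict(self, x):
        if self.seen == 0:
            return 0
        X = np.asarray(x, dtype=float).reshape(1, -1)
        votes = np.array([int(m.predict(X)[0]) for m in self.members])
        w = self._weights()
        return int(w[votes == 1].sum() > w[votes == 0].sum())

    def partial_fit(self, x, y, s):
        """x: feature vector, y: 0/1 label, s: 0/1 sensitive attribute."""
        X = np.asarray(x, dtype=float).reshape(1, -1)
        self.cnt[s] += 1
        for i, m in enumerate(self.members):
            if self.seen > 0:  # prequential bookkeeping: test before training
                yhat = int(m.predict(X)[0])
                self.acc[i] += ((yhat == y) - self.acc[i]) / self.seen
                self.pos[i, s] += yhat
            k = int(self.rng.poisson(1.0))   # Oza-style online bagging weight
            if k > 0 or self.seen == 0:      # ensure every member is fitted once
                m.partial_fit(X, [y], classes=[0, 1], sample_weight=[max(k, 1)])
        self.seen += 1
```

Raising `lam` shifts the member weighting toward low-discrimination components, which is one plausible reading of how a single hyperparameter can steer the fairness-accuracy balance.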
Related papers
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
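Of the two strategies, reweighing is the easier one to make concrete. The sketch below is a standard Kamiran-and-Calders-style reweighing pass, shown only to illustrate the mechanism the summary refers to; the function name and interface are hypothetical, not the paper's implementation.

```python
import numpy as np

def reweigh(s, y):
    """s, y: integer arrays of sensitive-group and label ids.
    Returns one weight per row: P(S=s) * P(Y=y) / P(S=s, Y=y), so the
    weighted data makes S and Y look statistically independent."""
    s, y = np.asarray(s), np.asarray(y)
    w = np.empty(len(y), dtype=float)
    for g in np.unique(s):
        for c in np.unique(y):
            cell = (s == g) & (y == c)
            p_joint = max(cell.mean(), 1e-12)
            w[cell] = (s == g).mean() * (y == c).mean() / p_joint
    return w

# weights = reweigh(S_train, y_train); clf.fit(X_train, y_train, sample_weight=weights)
```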
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
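A hedged sketch of the general weight-perturbation idea follows: evaluate the fairness loss at an adversarially perturbed copy of the weights (a SAM-like inner step) and regularize against it, so fairness survives small weight or distribution shifts. The function `rfr_step`, the loss closures, and the hyperparameters `rho` and `gamma` are assumptions for illustration; this is not the authors' RFR code.

```python
import torch

def rfr_step(model, task_loss, fair_loss, opt, rho=0.05, gamma=1.0):
    """task_loss() and fair_loss() are closures over the current batch;
    both must depend on model's parameters."""
    # 1) ascent direction: gradient of the fairness loss at the current weights
    opt.zero_grad()
    fair_loss().backward()
    grads = [None if p.grad is None else p.grad.detach().clone()
             for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads if g is not None)) + 1e-12
    # 2) step to the (approximately) worst-case nearby weights
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            if g is not None:
                p.add_(rho * g / norm)
    # 3) total loss at the perturbed point; its gradient drives the update
    opt.zero_grad()
    loss = task_loss() + gamma * fair_loss()
    loss.backward()
    # 4) undo the perturbation, then apply the update computed in step 3
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            if g is not None:
                p.sub_(rho * g / norm)
    opt.step()
    return float(loss)
```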
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Preventing Discriminatory Decision-making in Evolving Data Streams [8.952662914331901]
Bias in machine learning has rightly received significant attention over the last decade.
Most fair machine learning (fair-ML) work to address bias in decision-making systems has focused solely on the offline setting.
Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking.
arXiv Detail & Related papers (2023-02-16T01:20:08Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
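FairCOCCO's own estimator is not reproduced here; as a simpler relative that conveys the flavor of kernel covariance-operator measures, the sketch below computes an empirical HSIC between model outputs and the sensitive attribute (near zero when they look independent). The function names and the fixed RBF bandwidth are illustrative assumptions.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF kernel Gram matrix for 1-D or 2-D inputs."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(preds, sens, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2."""
    n = len(preds)
    K, L = rbf_gram(preds, sigma), rbf_gram(sens, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# hsic(model.predict_proba(X)[:, 1], S)  -> larger = more dependence on S
```

A score like this can be used directly as a differentiable fairness penalty during training, which is the role the operator-based measure plays in the paper.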
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fairness Reprogramming [42.65700878967251]
We propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique.
Specifically, FairReprogram considers the case where the model cannot be changed and appends to the input a set of perturbations, called the fairness trigger.
We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models.
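A minimal sketch of the trigger idea, under stated assumptions: the model stays frozen and only an input perturbation is optimized so that the frozen model's group-wise positive rates equalize. Here the trigger is simply added to the features for brevity, whereas the paper appends perturbations (e.g., a border patch for images); `learn_fairness_trigger` and its interface are hypothetical.

```python
import torch

def learn_fairness_trigger(model, X, S, steps=200, lr=0.05):
    """model: frozen torch module mapping (n, d) inputs to (n,) logits.
    X: (n, d) float tensor; S: (n,) 0/1 group tensor."""
    for p in model.parameters():
        p.requires_grad_(False)                    # the model is never changed
    trigger = torch.zeros(X.shape[1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        probs = torch.sigmoid(model(X + trigger))  # only the input is perturbed
        gap = probs[S == 1].mean() - probs[S == 0].mean()  # parity gap
        loss = gap ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()
```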
arXiv Detail & Related papers (2022-09-21T09:37:00Z)
- Adaptive Fairness-Aware Online Meta-Learning for Changing Environments [29.073555722548956]
The fairness-aware online learning framework has emerged as a powerful tool for the continual lifelong learning setting.
Existing methods make heavy use of the i.i.d. assumption for data and hence provide static regret analysis for the framework.
We propose a novel adaptive fairness-aware online meta-learning algorithm, namely FairSAOML, which is able to adapt to changing environments in both bias control and model precision.
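FairSAOML itself is a meta-learning algorithm with regret guarantees; as a loose, hypothetical sketch of the underlying primal-dual mechanic, the loop below takes gradient steps on the task loss while a dual variable prices the fairness violation, growing when the (possibly drifting) stream exceeds a tolerance and relaxing otherwise. All names and update rules are illustrative.

```python
import numpy as np

def online_fair_updates(stream, dim, eta=0.1, eta_dual=0.05, eps=0.05):
    """stream yields (x, y, s): feature vector, 0/1 label, 0/1 group."""
    w = np.zeros(dim)          # primal: logistic model parameters
    lam = 0.0                  # dual: price of the fairness constraint
    rate = np.zeros(2)         # running mean score per group
    cnt = np.zeros(2)
    for x, y, s in stream:
        p = 1.0 / (1.0 + np.exp(-w @ x))
        cnt[s] += 1
        rate[s] += (p - rate[s]) / cnt[s]
        # primal step: task gradient plus lam times a parity surrogate gradient
        push = np.sign(rate[1] - rate[0]) * (1.0 if s == 1 else -1.0)
        w -= eta * ((p - y) * x + lam * push * p * (1 - p) * x)
        # dual step: lam grows while the gap exceeds eps, shrinks as the
        # (possibly drifted) stream comes back within tolerance
        lam = max(0.0, lam + eta_dual * (abs(rate[1] - rate[0]) - eps))
    return w
```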
arXiv Detail & Related papers (2022-05-20T15:29:38Z)
- DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks [71.6879432974126]
We introduce DECAF: a GAN-based fair synthetic data generator for tabular data.
We show that DECAF successfully removes undesired bias and is capable of generating high-quality synthetic data.
We provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
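DECAF trains per-node GAN generators over a causal graph; the toy sketch below keeps only the key mechanism, generating variables in topological order and severing the sensitive attribute's outgoing edges at sampling time. Linear-Gaussian nodes stand in for the learned generators, and the graph and coefficients are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Nodes: 0 = S (sensitive), 1 = X1, 2 = X2, 3 = Y.  coef[i][j]: weight of edge j -> i.
parents = {0: [], 1: [0], 2: [0, 1], 3: [1, 2]}
coef = {1: {0: 1.5}, 2: {0: 0.8, 1: 0.5}, 3: {1: 1.0, 2: 1.0}}

def sample(n, cut_s_edges=False):
    drop = {0} if cut_s_edges else set()   # sever S's outgoing edges
    v = np.zeros((n, 4))
    for node in (0, 1, 2, 3):              # topological order of the DAG
        mean = sum(coef[node][p] * v[:, p]
                   for p in parents[node] if p not in drop)
        v[:, node] = mean + rng.normal(size=n)
    return v

biased, debiased = sample(10_000), sample(10_000, cut_s_edges=True)
print(np.corrcoef(biased[:, 0], biased[:, 3])[0, 1],      # clearly nonzero
      np.corrcoef(debiased[:, 0], debiased[:, 3])[0, 1])  # near zero
```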
arXiv Detail & Related papers (2021-10-25T12:39:56Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
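Gradient reversal (GRAD) is the most code-shaped ingredient above. The sketch below is a standard gradient-reversal layer, not the paper's exact setup: an adversarial head predicts the protected class from the encoder's features, and the flipped gradient pushes the encoder to discard that information. The layer sizes and training step are illustrative.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)              # identity on the forward pass
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.alpha * grad_out, None   # flip the gradient sign

encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
task_head = nn.Linear(32, 2)             # predicts the label
adv_head = nn.Linear(32, 2)              # predicts the protected class

# opt = torch.optim.Adam([*encoder.parameters(), *task_head.parameters(),
#                         *adv_head.parameters()], lr=1e-3)

def step(x, y, s, opt, alpha=1.0):
    z = encoder(x)
    loss_task = nn.functional.cross_entropy(task_head(z), y)
    loss_adv = nn.functional.cross_entropy(
        adv_head(GradReverse.apply(z, alpha)), s)
    opt.zero_grad()
    (loss_task + loss_adv).backward()    # reversed grad debiases the encoder
    opt.step()
```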
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind: it considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also find that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization [5.337302350000984]
We present Fairband, a bandit-based fairness-aware hyperparameter optimization (HO) algorithm.
By introducing fairness notions into HO, we enable seamless and efficient integration of fairness objectives into real-world ML pipelines.
We show that Fairband can efficiently navigate the fairness-accuracy trade-off through hyperparameter optimization.
arXiv Detail & Related papers (2020-10-07T21:35:16Z)
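As a hedged illustration of the bandit framing, the sketch below runs successive halving over hyperparameter configurations and ranks arms by a blended fairness-accuracy score, so the search budget itself follows the trade-off. The blend `alpha`, the name `fair_halving`, and the `evaluate` interface are assumptions, not Fairband's actual algorithm.

```python
def fair_halving(configs, evaluate, alpha=0.5, budget0=1, eta=3):
    """evaluate(cfg, budget) -> (accuracy, fairness), both in [0, 1]."""
    arms, budget = list(configs), budget0
    while len(arms) > 1:
        scored = []
        for cfg in arms:
            acc, fair = evaluate(cfg, budget)   # cheap evaluation at small budget
            scored.append(((1 - alpha) * acc + alpha * fair, cfg))
        scored.sort(key=lambda t: t[0], reverse=True)
        arms = [cfg for _, cfg in scored[:max(1, len(arms) // eta)]]
        budget *= eta                           # survivors get more resources
    return arms[0]

# best = fair_halving(candidate_configs, evaluate_pipeline, alpha=0.6)
```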