fairmodels: A Flexible Tool For Bias Detection, Visualization, And
Mitigation
- URL: http://arxiv.org/abs/2104.00507v1
- Date: Thu, 1 Apr 2021 15:06:13 GMT
- Title: fairmodels: A Flexible Tool For Bias Detection, Visualization, And
Mitigation
- Authors: Jakub Wiśniewski, Przemysław Biecek
- Abstract summary: This article introduces an R package fairmodels that helps to validate fairness and eliminate bias in classification models.
The implemented set of functions and fairness metrics enables model fairness validation from different perspectives.
The package includes a series of methods for bias mitigation that aim to diminish the discrimination in the model.
- Score: 3.548416925804316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning decision systems are becoming omnipresent in our lives. From
dating apps to rating loan seekers, algorithms affect both our well-being and our
future. These systems, however, are not infallible. Moreover, complex predictive
models readily learn social biases present in historical data, which can lead to
increasing discrimination. If we want to create models responsibly, we need tools
for in-depth validation of models, including from the perspective of potential
discrimination. This article introduces fairmodels, an R package that helps to
validate fairness and eliminate bias in classification models in an easy and
flexible way. The fairmodels package offers a
model-agnostic approach to bias detection, visualization and mitigation. The
implemented set of functions and fairness metrics enables model fairness
validation from different perspectives. The package includes a series of
methods for bias mitigation that aim to diminish the discrimination in the
model. The package is designed not only to examine a single model, but also to
facilitate comparisons between multiple models.
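The sketch below is a rough illustration of the workflow the abstract describes (bias detection, visualization, a pre-processing mitigation step, and comparison of several models). It follows the package's documented German Credit example; the bundled german dataset, its Risk target and Sex protected attribute, and the default arguments are assumptions to verify against the installed version of fairmodels and DALEX.
```r
# Minimal sketch, assuming the fairmodels German Credit example:
# detection with fairness_check(), visualization with plot(),
# mitigation with reweight(), and comparison of two explainers.
library(DALEX)
library(fairmodels)

data("german")                              # German Credit data shipped with fairmodels
y_numeric <- as.numeric(german$Risk) - 1    # 0/1 target for the explainer

# Baseline classifier wrapped in a model-agnostic DALEX explainer
glm_model <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))
explainer_glm <- explain(glm_model, data = german[, -1], y = y_numeric,
                         label = "glm", verbose = FALSE)

# Bias detection: several group fairness metrics relative to the privileged level
fobject <- fairness_check(explainer_glm,
                          protected  = german$Sex,
                          privileged = "male")
print(fobject)
plot(fobject)                               # Fairness Check plot

# Bias mitigation (pre-processing): reweight observations so that protected
# groups are balanced with respect to the target
w <- reweight(protected = german$Sex, y = y_numeric)
glm_weighted <- glm(Risk ~ ., data = german, weights = w,
                    family = binomial(link = "logit"))  # may warn about non-integer weights
explainer_weighted <- explain(glm_weighted, data = german[, -1], y = y_numeric,
                              label = "glm_reweighted", verbose = FALSE)

# Model comparison: pass several explainers to a single fairness_check()
fobject_both <- fairness_check(explainer_glm, explainer_weighted,
                               protected  = german$Sex,
                               privileged = "male")
plot(fobject_both)
```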
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- fairml: A Statistician's Take on Fair Machine Learning Modelling [0.0]
We describe the fairml package which implements our previous work (Scutari, Panero, and Proissl 2022) and related models in the literature.
fairml is designed around classical statistical models and penalised regression results.
The constraint used to enforce fairness is orthogonal to model estimation, making it possible to mix-and-match the desired model family and fairness definition for each application.
arXiv Detail & Related papers (2023-05-03T09:59:53Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate that there is no single model that works best in all cases.
By choosing an appropriate bias model, we can obtain better robustness results than baselines with a more sophisticated model design.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- Revealing Unfair Models by Mining Interpretable Evidence [50.48264727620845]
The popularity of machine learning has increased the risk of unfair models being deployed in high-stakes applications.
In this paper, we tackle the novel task of revealing unfair models by mining interpretable evidence.
Our method finds highly interpretable and solid evidence to effectively reveal the unfairness of trained models.
arXiv Detail & Related papers (2022-07-12T20:03:08Z)
- xFAIR: Better Fairness via Model-based Rebalancing of Protected Attributes [15.525314212209564]
Machine learning software can generate models that inappropriately discriminate against specific protected social groups.
We propose xFAIR, a model-based extrapolation method that is capable of both mitigating bias and explaining the cause.
arXiv Detail & Related papers (2021-10-03T22:10:14Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps to collect discrimination-free features that would improve the model performance.
arXiv Detail & Related papers (2020-02-26T10:06:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.