Making ML models fairer through explanations: the case of LimeOut
- URL: http://arxiv.org/abs/2011.00603v1
- Date: Sun, 1 Nov 2020 19:07:11 GMT
- Title: Making ML models fairer through explanations: the case of LimeOut
- Authors: Guilherme Alves, Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli
- Abstract summary: Algorithmic decisions are now being used on a daily basis, and are based on Machine Learning (ML) processes that may be complex and biased.
This raises several concerns given the critical impact that biased decisions may have on individuals or on society as a whole.
We show how the simple idea of "feature dropout" followed by an "ensemble approach" can improve model fairness.
- Score: 7.952582509792971
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Algorithmic decisions are now being used on a daily basis, and are
based on Machine Learning (ML) processes that may be complex and biased. This raises
several concerns given the critical impact that biased decisions may have on
individuals or on society as a whole. Not only do unfair outcomes affect human
rights, they also undermine public trust in ML and AI. In this paper, we address
fairness issues of ML models based on decision outcomes, and we show how the
simple idea of "feature dropout" followed by an "ensemble approach" can improve
model fairness. To illustrate, we will revisit the case of "LimeOut" that was
proposed to tackle "process fairness", which measures a model's reliance on
sensitive or discriminatory features. Given a classifier, a dataset and a set
of sensitive features, LimeOut first assesses whether the classifier is fair by
checking its reliance on sensitive features using "Lime explanations". If
deemed unfair, LimeOut then applies feature dropout to obtain a pool of
classifiers. These are then combined into an ensemble classifier that was
empirically shown to be less dependent on sensitive features without
compromising the classifier's accuracy. We present different experiments on
multiple datasets and several state-of-the-art classifiers, which show that
LimeOut's classifiers improve (or at least maintain) not only process fairness
but also other fairness metrics such as individual and group fairness, equal
opportunity, and demographic parity.
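To make the pipeline described in the abstract concrete, below is a minimal sketch of a LimeOut-style "assess, drop out, ensemble" loop. It is not the authors' implementation: the data, feature names, and sensitive set are hypothetical, logistic regression stands in for the classifiers studied in the paper, and permutation importance is used as a simple stand-in for the aggregated LIME explanations that LimeOut uses to rank a model's reliance on features.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical setup: synthetic data with made-up feature names; in practice
# this would be a real dataset with its own sensitive attributes.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
feature_names = ["age", "education", "hours", "capital", "income_src",
                 "gender", "race", "zip"]        # hypothetical names
sensitive = {"gender", "race"}                   # hypothetical sensitive set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
all_cols = list(range(X.shape[1]))

def fit(cols):
    """Train the base learner on the given column subset."""
    return LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)

base = fit(all_cols)

# Step 1: rank global feature reliance. Permutation importance is used here as
# a simple stand-in for aggregating LIME explanations over sampled instances.
imp = permutation_importance(base, X_te, y_te, n_repeats=10, random_state=0)
top = [feature_names[i] for i in np.argsort(-imp.importances_mean)[:5]]
flagged = [f for f in top if f in sensitive]     # sensitive features relied on

if flagged:
    # Step 2: feature dropout -- one classifier per flagged feature, plus one
    # with all flagged features removed, as described in the abstract.
    subsets = [[c for c in all_cols if feature_names[c] != f] for f in flagged]
    subsets.append([c for c in all_cols if feature_names[c] not in flagged])
    pool = [(cols, fit(cols)) for cols in subsets]

    # Step 3: combine the pool into an ensemble by averaging probabilities.
    def ensemble_proba(X_new):
        return np.mean([m.predict_proba(X_new[:, cols]) for cols, m in pool],
                       axis=0)

    acc = (ensemble_proba(X_te).argmax(axis=1) == y_te).mean()
    print(f"flagged: {flagged}, ensemble accuracy: {acc:.3f}")
else:
    print("no strong reliance on sensitive features detected")
```
The main simplification is the reliance check: LimeOut aggregates per-instance LIME weights over sampled inputs before deciding whether the model leans on sensitive features, whereas the sketch above uses a single global permutation-importance ranking.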
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and consequently between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
The study, the most comprehensive of its kind, considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- LimeOut: An Ensemble Approach To Improve Process Fairness [8.9379057739817]
We propose a framework that relies on "feature drop-out" to tackle process fairness.
We make use of "LIME Explanations" to assess a classifier's fairness and to determine the sensitive features to remove.
This produces a pool of classifiers whose ensemble is shown empirically to be less dependent on sensitive features, with accuracy improved or unaffected.
arXiv Detail & Related papers (2020-06-17T09:00:58Z)
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps collect discrimination-free features that improve model performance.
arXiv Detail & Related papers (2020-02-26T10:06:31Z)
- Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? [11.435833538081557]
Empirical Risk Minimization (ERM) may produce a classifier that not only is biased but also has suboptimal accuracy on the true data distribution.
We examine the ability of fairness-constrained ERM to correct this problem.
We also consider other recovery methods including reweighting the training data, Equalized Odds, and Demographic Parity.
arXiv Detail & Related papers (2019-12-02T22:00:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.