FairALM: Augmented Lagrangian Method for Training Fair Models with
Little Regret
- URL: http://arxiv.org/abs/2004.01355v2
- Date: Wed, 24 Jun 2020 00:17:37 GMT
- Title: FairALM: Augmented Lagrangian Method for Training Fair Models with
Little Regret
- Authors: Vishnu Suresh Lokhande, Aditya Kumar Akash, Sathya N. Ravi and Vikas
Singh
- Abstract summary: It is now accepted that, because of biases in the datasets we present to the models, fairness-oblivious training will lead to unfair models.
Here, we study mechanisms that impose fairness concurrently while training the model.
- Score: 42.66567001275493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic decision making based on computer vision and machine
learning technologies continues to permeate our lives. But issues related to
the biases of these models, and the extent to which they treat certain
segments of the population unfairly, have led to concern in the general
public. It is now accepted that, because of biases in the datasets we present
to the models, fairness-oblivious training will lead to unfair models. An
interesting topic is the study of mechanisms via which the de novo design or
training of the model can be informed by fairness measures. Here, we study
mechanisms that impose fairness concurrently while training the model. While
existing fairness-based approaches in vision have largely relied on training
adversarial modules together with the primary classification/regression task,
in an effort to remove the influence of the protected attribute or variable,
we show how ideas based on well-known optimization concepts can provide a
simpler alternative. In our proposed scheme, imposing fairness requires only
specifying the protected attribute and using our optimization routine. We
provide a detailed technical analysis and present experiments demonstrating
that various fairness measures from the literature can be reliably imposed on
a number of training tasks in vision in a manner that is interpretable.
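The abstract does not spell out the optimization routine, so a sketch may help fix ideas. The following is a minimal, hypothetical PyTorch rendering of augmented Lagrangian training under a fairness constraint; the demographic-parity surrogate, the `(x, y, a)` batch layout, and all names and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal, hypothetical sketch of fairness training via an augmented
# Lagrangian, in the spirit of the scheme described above. It is NOT the
# authors' released code: the demographic-parity surrogate, the (x, y, a)
# batch layout, and every hyperparameter are illustrative assumptions.
import torch
import torch.nn.functional as F

def fairness_gap(scores, a):
    """Differentiable surrogate constraint c(w): difference in mean
    positive-class score between the two protected groups. Assumes each
    minibatch contains samples from both groups."""
    return scores[a == 1].mean() - scores[a == 0].mean()

def train_fair_alm(model, loader, epochs=10, eta=1.0, dual_lr=0.05):
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    lam = 0.0  # Lagrange multiplier; unconstrained, since c(w) = 0 is an equality
    for _ in range(epochs):
        for x, y, a in loader:  # features, binary labels, protected attribute
            scores = torch.sigmoid(model(x)).squeeze(-1)
            task_loss = F.binary_cross_entropy(scores, y.float())
            c = fairness_gap(scores, a)
            # Primal step: descend on loss + lam * c + (eta / 2) * c^2
            loss = task_loss + lam * c + 0.5 * eta * c ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Dual step: ascend on the multiplier using the constraint value
            lam += dual_lr * float(c)
    return model
```

Compared to training an adversarial module, the only extra state here is the scalar multiplier `lam`; its trajectory over training shows directly how much pressure the fairness constraint exerts, which is one sense in which such a scheme is interpretable.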
Related papers
- Enhancing Fairness and Performance in Machine Learning Models: A Multi-Task Learning Approach with Monte-Carlo Dropout and Pareto Optimality [1.5498930424110338]
This study introduces an approach to mitigate bias in machine learning by leveraging model uncertainty.
Our approach utilizes a multi-task learning (MTL) framework combined with Monte Carlo (MC) Dropout to assess and mitigate uncertainty in predictions related to protected labels (a minimal MC Dropout sketch appears after this list).
arXiv Detail & Related papers (2024-04-12T04:17:50Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the *fair* few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Revealing Unfair Models by Mining Interpretable Evidence [50.48264727620845]
The popularity of machine learning has increased the risk of unfair models being deployed in high-stakes applications.
In this paper, we tackle the novel task of revealing unfair models by mining interpretable evidence.
Our method finds highly interpretable and solid evidence to effectively reveal the unfairness of trained models.
arXiv Detail & Related papers (2022-07-12T20:03:08Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Modeling Techniques for Machine Learning Fairness: A Survey [17.925809181329015]
In recent years, various techniques have been developed to mitigate bias in machine learning models.
In this survey, we review the current progress of in-processing bias mitigation techniques.
arXiv Detail & Related papers (2021-11-04T17:17:26Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations (a minimal sketch of this loss appears after this list).
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Biased Models Have Biased Explanations [10.9397029555303]
We study fairness in Machine Learning (FairML) through the lens of attribute-based explanations generated for machine learning models.
We first translate existing statistical notions of group fairness and define these notions in terms of explanations given by the model.
Then, we propose a novel way of detecting (un)fairness for any black box model.
arXiv Detail & Related papers (2020-12-20T18:09:45Z)
- Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness [7.673007415383724]
We have created a benchmark of 40 top-rated models from Kaggle used for 5 different tasks.
We have applied 7 mitigation techniques on these models and analyzed the fairness, mitigation results, and impacts on performance.
arXiv Detail & Related papers (2020-05-21T23:35:53Z)
- Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z)
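The first related entry above pairs multi-task learning with Monte Carlo Dropout. As a point of reference, here is a minimal, hypothetical sketch of the MC Dropout ingredient alone: keeping dropout active at inference and aggregating several stochastic passes yields a per-example uncertainty estimate. How the MTL framework consumes that estimate is not described in the summary, so the sketch stops there; all names are assumptions.

```python
# Hypothetical sketch of Monte Carlo Dropout uncertainty estimation, the
# ingredient named in the multi-task-learning entry above. Illustrative
# only; not that paper's code.
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Run `n_samples` stochastic forward passes with dropout enabled and
    return the mean prediction and its per-example standard deviation."""
    model.train()  # keep dropout active (assumes a BatchNorm-free model)
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, uncertainty
```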
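Similarly, for "Contrastive Learning for Fair Representations", here is a minimal, hypothetical sketch of a loss that encourages instances sharing a class label to have similar representations; the exact loss in that paper may differ, and the names and temperature are assumptions.

```python
# Hypothetical sketch of the contrastive idea in "Contrastive Learning for
# Fair Representations": pull together representations of instances that
# share a class label. Illustrative only; not that paper's code.
import torch
import torch.nn.functional as F

def same_label_contrastive(z, y, temperature=0.1):
    """z: (B, d) representations; y: (B,) integer class labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                    # pairwise similarities
    eye = torch.eye(len(y), dtype=torch.bool)
    sim = sim.masked_fill(eye, float('-inf'))        # drop self-pairs
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~eye  # same-label pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Mean log-probability of same-label pairs for each anchor that has any
    per_anchor = (pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()
```

Such a term would typically be added to the task loss, e.g. `total = task_loss + alpha * same_label_contrastive(z, y)`.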