Coping with Mistreatment in Fair Algorithms
- URL: http://arxiv.org/abs/2102.10750v1
- Date: Mon, 22 Feb 2021 03:26:06 GMT
- Title: Coping with Mistreatment in Fair Algorithms
- Authors: Ankit Kulshrestha, Ilya Safro
- Abstract summary: We study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric.
We propose a conceptually simple method to mitigate this bias.
We rigorously analyze the proposed method and evaluate it on several real world datasets demonstrating its efficacy.
- Score: 1.2183405753834557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning actively impacts our everyday life in almost all endeavors
and domains, such as healthcare, finance, and energy. As our dependence on
machine learning increases, it is inevitable that these algorithms will be used
to make decisions that have a direct impact on society, spanning all scales
from personal choices to worldwide policies. Hence, it is crucial
to ensure that (un)intentional bias does not affect machine learning
algorithms, especially when they are required to make decisions that may have
unintended consequences. Algorithmic fairness techniques have found traction in
the machine learning community, and many methods and metrics have been proposed
to ensure and evaluate fairness in algorithms and data collection.
In this paper, we study algorithmic fairness in a supervised learning
setting and examine the effect of optimizing a classifier for the Equal
Opportunity metric. We demonstrate that such a classifier has an increased
false positive rate across sensitive groups, and we propose a conceptually simple
method to mitigate this bias. We rigorously analyze the proposed method and
evaluate it on several real-world datasets, demonstrating its efficacy.
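A minimal sketch (using numpy, not the paper's implementation) of the two quantities the abstract contrasts: the Equal Opportunity gap, i.e., the difference in true positive rates across sensitive groups, and the false positive rate gap that the paper shows can widen when a classifier is optimized for Equal Opportunity alone. The toy data and threshold are illustrative assumptions.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Return per-group true positive rate (TPR) and false positive rate (FPR)."""
    tpr, fpr = {}, {}
    for g in np.unique(group):
        mask = group == g
        pos = mask & (y_true == 1)   # actual positives in group g
        neg = mask & (y_true == 0)   # actual negatives in group g
        tpr[g] = y_pred[pos].mean() if pos.any() else np.nan
        fpr[g] = y_pred[neg].mean() if neg.any() else np.nan
    return tpr, fpr

# Toy example: two sensitive groups, noisy binary predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = (y_true ^ (rng.random(1000) < 0.2)).astype(int)  # flip 20% of labels

tpr, fpr = group_rates(y_true, y_pred, group)
print("Equal Opportunity gap (TPR difference):", abs(tpr[0] - tpr[1]))
print("False positive rate gap:", abs(fpr[0] - fpr[1]))
```

An Equal Opportunity classifier drives the first printed gap toward zero; the paper's observation concerns what happens to the second.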
Related papers
- Relevance-aware Algorithmic Recourse [3.6141428739228894]
Algorithmic recourse emerges as a tool for clarifying decisions made by predictive models.
Current algorithmic recourse methods treat all domain values equally, which is unrealistic in real-world settings.
We propose a novel framework, Relevance-Aware Algorithmic Recourse (RAAR), that leverages the concept of relevance in applying algorithmic recourse to regression tasks.
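A hedged sketch of the core idea behind relevance-aware recourse: weight each feature's change by a user-supplied relevance score instead of treating all features equally. The weights and function below are illustrative assumptions, not the RAAR algorithm itself.

```python
import numpy as np

def weighted_recourse_cost(x, x_cf, relevance):
    """L1 cost of moving from x to counterfactual x_cf, scaled per feature by relevance."""
    return float(np.sum(relevance * np.abs(x_cf - x)))

x = np.array([0.2, 0.5, 0.9])          # current instance
x_cf = np.array([0.2, 0.8, 0.4])       # candidate counterfactual
relevance = np.array([1.0, 0.3, 2.0])  # higher = costlier (less realistic) to change

print(weighted_recourse_cost(x, x_cf, relevance))  # 0.3*0.3 + 2.0*0.5 = 1.09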
arXiv Detail & Related papers (2024-05-29T13:25:49Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
Fair algorithms that generalize better while maintaining strong predictive performance are needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
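A minimal sketch, under assumed details, of priority reweighing: upweight training samples whose predicted score sits near the decision boundary, normalized separately per sensitive group. This conveys the flavor of adaptive reweighing, not the paper's exact update rule.

```python
import numpy as np

def boundary_weights(scores, group, eps=1e-6):
    """Weight ~ closeness to the 0.5 decision boundary, mean-1 per group."""
    w = 1.0 / (np.abs(scores - 0.5) + eps)
    for g in np.unique(group):
        m = group == g
        w[m] *= m.sum() / w[m].sum()  # per-group normalization: mean weight is 1
    return w

scores = np.array([0.51, 0.9, 0.48, 0.1])  # model scores on training samples
group = np.array([0, 0, 1, 1])             # sensitive group labels
print(boundary_weights(scores, group))     # boundary samples dominate their group
```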
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Measuring, Interpreting, and Improving Fairness of Algorithms using Causal Inference and Randomized Experiments [8.62694928567939]
We present an algorithm-agnostic framework (MIIF) to Measure, Interpret, and Improve the Fairness of an algorithmic decision.
We measure the algorithm bias using randomized experiments, which enables the simultaneous measurement of disparate treatment, disparate impact, and economic value.
We also develop an explainable machine learning model which accurately interprets and distills the beliefs of a blackbox algorithm.
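A sketch of one quantity the MIIF framework measures: the disparate impact ratio, i.e., the favorable-decision rate of a protected group relative to a reference group. The randomized-experiment machinery of the paper is not shown; this only illustrates the statistic being estimated.

```python
import numpy as np

def disparate_impact(decision, group, protected, reference):
    """P(decision=1 | protected group) / P(decision=1 | reference group)."""
    p_prot = decision[group == protected].mean()
    p_ref = decision[group == reference].mean()
    return p_prot / p_ref

decision = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(disparate_impact(decision, group, protected="b", reference="a"))  # 0.25 / 0.75
```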
arXiv Detail & Related papers (2023-09-04T19:45:18Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is broad consensus on the need to develop AI applications with a human-centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first general definition of such an evaluation.
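A hedged sketch of the intuition: if unlearning worked, the model should be less certain about the "forgotten" samples. Predictive entropy stands in below as a simple uncertainty proxy; the paper's epistemic-uncertainty formulation is more involved, so treat this as an illustration only.

```python
import numpy as np

def mean_predictive_entropy(probs):
    """Mean entropy of a batch of predicted class distributions, shape (n, classes)."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

probs_before = np.array([[0.95, 0.05], [0.9, 0.1]])  # confident on the forget set
probs_after = np.array([[0.55, 0.45], [0.6, 0.4]])   # closer to uniform after unlearning

print(mean_predictive_entropy(probs_before))  # low uncertainty
print(mean_predictive_entropy(probs_after))   # higher -> evidence of forgetting
```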
arXiv Detail & Related papers (2022-08-23T09:37:31Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
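A minimal sketch, under assumed details, of folding a user-supplied fairness constraint into a training objective as a penalty term. FAIRLEARN's actual procedure and constraint language are not reproduced here; `lam` and the demographic-parity gap are illustrative choices.

```python
import numpy as np

def penalized_loss(y_true, p_pred, group, lam=1.0):
    """Cross-entropy plus a penalty on the gap in mean predicted scores between groups."""
    p = np.clip(p_pred, 1e-12, 1 - 1e-12)
    ce = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    gap = abs(p[group == 0].mean() - p[group == 1].mean())  # parity violation
    return ce + lam * gap

y = np.array([1, 0, 1, 0])
p = np.array([0.8, 0.3, 0.7, 0.4])
g = np.array([0, 0, 1, 1])
print(penalized_loss(y, p, g, lam=0.5))
```

Raising `lam` trades accuracy for a smaller fairness violation, which is the lever a user constraint would control.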
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
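A sketch of what "multi-group" fairness asks for: instead of comparing two groups, bound the worst-case gap in a statistic (here, positive prediction rate) over all groups simultaneously. The end-to-end optimization framework of the paper is not shown.

```python
import numpy as np

def worst_case_gap(y_pred, group):
    """Largest difference in positive prediction rate across all groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0])
group = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
print(worst_case_gap(y_pred, group))  # gap between best- and worst-treated group
```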
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
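A hedged sketch of the BALD acquisition score mentioned above: the mutual information between the label and model parameters, estimated from Monte Carlo samples (e.g., dropout passes) as the entropy of the mean prediction minus the mean per-sample entropy. Illustrative only; not the paper's experimental setup.

```python
import numpy as np

def entropy(p, axis=-1):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def bald_score(mc_probs):
    """mc_probs: (n_mc_samples, n_classes) MC predictions for a single input."""
    return entropy(mc_probs.mean(axis=0)) - entropy(mc_probs).mean()

# Disagreeing MC passes -> high BALD (informative); agreeing passes -> near zero.
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
agree = np.array([[0.6, 0.4], [0.6, 0.4]])
print(bald_score(disagree), bald_score(agree))
```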
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair, fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training.
We empirically demonstrate that our proposed approach efficiently mitigates biases on model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
- How fair can we go in machine learning? Assessing the boundaries of fairness in decision trees [0.12891210250935145]
We present the first methodology that allows one to explore the statistical limits of bias mitigation interventions.
We focus our study on decision tree classifiers since they are widely accepted in machine learning.
We conclude experimentally that our method can optimize decision tree models, making them fairer at a small cost in classification error.
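A minimal sketch of the accuracy-fairness trade-off exploration the entry describes: sweep a decision threshold and record (error, parity gap) pairs, tracing a frontier. The paper's multi-objective method over decision trees is more sophisticated; this only conveys the shape of the analysis.

```python
import numpy as np

def tradeoff_curve(scores, y_true, group, thresholds):
    """Return (threshold, error, parity gap) for each candidate threshold."""
    pts = []
    for t in thresholds:
        y_hat = (scores >= t).astype(int)
        err = np.mean(y_hat != y_true)
        gap = abs(y_hat[group == 0].mean() - y_hat[group == 1].mean())
        pts.append((t, err, gap))
    return pts

rng = np.random.default_rng(1)
scores = rng.random(200)
y_true = (scores + rng.normal(0, 0.2, 200) > 0.5).astype(int)
group = rng.integers(0, 2, 200)
for t, err, gap in tradeoff_curve(scores, y_true, group, [0.3, 0.5, 0.7]):
    print(f"threshold={t:.1f} error={err:.2f} parity_gap={gap:.2f}")
```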
arXiv Detail & Related papers (2020-06-22T16:28:26Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.