Measuring, Interpreting, and Improving Fairness of Algorithms using
Causal Inference and Randomized Experiments
- URL: http://arxiv.org/abs/2309.01780v1
- Date: Mon, 4 Sep 2023 19:45:18 GMT
- Title: Measuring, Interpreting, and Improving Fairness of Algorithms using
Causal Inference and Randomized Experiments
- Authors: James Enouen and Tianshu Sun and Yan Liu
- Abstract summary: We present an algorithm-agnostic framework (MIIF) to Measure, Interpret, and Improve the Fairness of an algorithmic decision.
We measure an algorithm's bias using randomized experiments, which enables the simultaneous measurement of disparate treatment, disparate impact, and economic value.
We also develop an explainable machine learning model that accurately interprets and distills the beliefs of a black-box algorithm.
- Score: 8.62694928567939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithm fairness has become a central problem for the broad adoption of
artificial intelligence. Although the past decade has witnessed an explosion of
excellent work studying algorithm biases, achieving fairness in real-world AI
production systems has remained a challenging task. Most existing works fall
short in practical applications because they rely on conflicting measurement
techniques and/or heavy assumptions, or require code access to the production
models, whereas real systems demand an easy-to-implement measurement framework
and a systematic way to correct the detected sources of bias.
In this paper, we leverage recent advances in causal inference and
interpretable machine learning to present an algorithm-agnostic framework
(MIIF) to Measure, Interpret, and Improve the Fairness of an algorithmic
decision. We measure an algorithm's bias using randomized experiments, which
enables the simultaneous measurement of disparate treatment, disparate impact,
and economic value. Furthermore, using modern interpretability techniques, we
develop an explainable machine learning model which accurately interprets and
distills the beliefs of a black-box algorithm. Altogether, these techniques
create a simple and powerful toolset for studying algorithm fairness,
especially for understanding the cost of fairness in practical applications
like e-commerce and targeted advertising, where industry A/B testing is already
abundant.
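The measurement step lends itself to a short illustration. Below is a minimal sketch, not the authors' code, of how randomized exposure to an algorithm lets one read selection-rate gaps and per-group effects on economic value directly out of A/B-test logs; the column names and synthetic data are hypothetical.

```python
# A minimal sketch, not the authors' code: estimating selection-rate
# gaps (disparate impact) and per-group treatment effects on revenue
# from randomized-experiment logs. All names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "group":   rng.choice(["A", "B"], size=n),   # protected attribute
    "treated": rng.integers(0, 2, size=n),       # randomized exposure to the algorithm
})
# Synthetic responses: the algorithm selects group A slightly more often.
bias = np.where(df["group"] == "A", 0.10, 0.0)
df["decision"] = rng.random(n) < (0.30 + bias) * df["treated"]
df["revenue"] = df["decision"] * rng.gamma(2.0, 5.0, size=n)

rates = df[df["treated"] == 1].groupby("group")["decision"].mean()
print("Selection rates:\n", rates)
print("Disparate-impact ratio (4/5 rule):", rates.min() / rates.max())

# Per-group average treatment effect of the algorithm on economic value.
means = df.groupby(["group", "treated"])["revenue"].mean().unstack("treated")
print("Per-group ATE on revenue:\n", means[1] - means[0])
```

Because exposure is randomized, the per-group differences in means carry a causal reading rather than a merely correlational one, which is what separates this style of measurement from observational fairness audits.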
Related papers
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness? [8.679212948810916]
Several fairness pre-processing algorithms are available to alleviate implicit biases during model training.
These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy.
We evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble.
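As a rough illustration of the ensembling idea (a sketch under assumed data and names, not the paper's implementation), two weighting-style pre-processors can be combined by averaging the per-example weights they produce before training:

```python
# A sketch of the ensembling idea, not the paper's implementation: two
# pre-processing schemes each emit per-example weights, and the
# "ensemble" averages them before training. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
group = rng.integers(0, 2, size=n)                   # protected attribute
X = rng.normal(size=(n, 3)) + 0.5 * group[:, None]   # features correlated with group
y = (X[:, 0] + 0.8 * group + rng.normal(size=n) > 0.5).astype(int)

def reweighing(group, y):
    """Kamiran-Calders-style reweighing: w(g, c) = P(g) * P(c) / P(g, c)."""
    w = np.ones(len(y))
    for g in (0, 1):
        for c in (0, 1):
            cell = (group == g) & (y == c)
            if cell.any():
                w[cell] = (group == g).mean() * (y == c).mean() / cell.mean()
    return w

def group_balancing(group, y):
    """A second, simpler scheme: weight each group inversely to its size."""
    w = np.ones(len(y))
    for g in (0, 1):
        w[group == g] = 0.5 / (group == g).mean()
    return w

weights = (reweighing(group, y) + group_balancing(group, y)) / 2  # the ensemble
clf = LogisticRegression().fit(X, y, sample_weight=weights)
pred = clf.predict(X)
print("Selection-rate gap:", abs(pred[group == 0].mean() - pred[group == 1].mean()))
```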
arXiv Detail & Related papers (2022-12-05T21:54:29Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have the desired properties and admit both a natural error measure and algorithms with strong performance guarantees.
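For a flavor of the setting, here is a small simulation contrasting two classic baselines from this line of work, shortest-predicted-job-first and processor-sharing round-robin; it is illustrative only and not necessarily this paper's algorithm.

```python
# Learning-augmented non-clairvoyant scheduling, sketched with two
# classic baselines (not necessarily this paper's algorithm): SPJF
# trusts predictions; round-robin needs none and stays robust.
import numpy as np

def total_completion_spjf(p_true, p_pred):
    """Run jobs to completion in order of predicted length."""
    order = np.argsort(p_pred)
    return np.cumsum(p_true[order]).sum()

def total_completion_round_robin(p_true):
    """Processor-sharing round-robin: jobs finish in true-length order."""
    p = np.sort(p_true)
    n = len(p)
    phase = np.diff(np.concatenate(([0.0], p)))      # time between finishes
    finish = np.cumsum(phase * np.arange(n, 0, -1))  # completion times
    return finish.sum()

rng = np.random.default_rng(2)
p_true = rng.exponential(10.0, size=100)
for noise in (0.0, 5.0, 50.0):                       # prediction error scale
    p_pred = p_true + rng.normal(0, noise, size=100)
    print(f"noise={noise:5.1f}  SPJF={total_completion_spjf(p_true, p_pred):9.1f}  "
          f"RR={total_completion_round_robin(p_true):9.1f}")
```

With accurate predictions SPJF approaches the clairvoyant optimum, while round-robin's cost is unaffected by prediction quality; learning-augmented schedulers interpolate between the two.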
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions [4.594159253008448]
A large portion of fairness research has gone to producing tools that machine learning practitioners can use to audit for bias while designing their algorithms.
However, these fairness solutions see little application in practice.
This review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed.
arXiv Detail & Related papers (2021-12-10T17:51:20Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
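One common way to realize such a constraint-in-the-optimizer design, sketched generically here rather than as the paper's actual FAIRLEARN procedure, is to add a user-chosen fairness penalty to the training loss:

```python
# A generic sketch, not the paper's FAIRLEARN procedure: a user-chosen
# fairness penalty (here, a squared demographic-parity gap) is added to
# the logistic loss and minimized jointly with it.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 1_000
group = rng.integers(0, 2, size=n)
X = np.column_stack([rng.normal(size=n) + 0.7 * group, rng.normal(size=n)])
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.4).astype(float)

def objective(w, lam=5.0):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    log_loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[group == 0].mean() - p[group == 1].mean()  # the user constraint
    return log_loss + lam * gap ** 2

w = minimize(objective, x0=np.zeros(X.shape[1]), method="L-BFGS-B").x
p = 1.0 / (1.0 + np.exp(-X @ w))
print("Mean score gap after training:", p[group == 0].mean() - p[group == 1].mean())
```

The penalty weight plays the role of the user constraint's strictness: raising it trades accuracy for a smaller gap.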
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
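The conditional NML idea can be sketched directly, if inefficiently; MURAL's contribution is a tractable meta-learned approximation, so the brute-force version below is for intuition only.

```python
# Brute-force conditional NML for a binary classifier, for intuition
# only; MURAL itself meta-learns a tractable approximation of this.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cnml_probs(X, y, x_query):
    """Retrain once per candidate label, then normalize the likelihoods."""
    scores = []
    for label in (0, 1):
        X_aug = np.vstack([X, x_query])
        y_aug = np.append(y, label)
        clf = LogisticRegression().fit(X_aug, y_aug)
        scores.append(clf.predict_proba([x_query])[0][label])
    scores = np.array(scores)
    return scores / scores.sum()

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 2)) + 1.5 * y[:, None]
print(cnml_probs(X, y, np.array([0.75, 0.75])))  # near the boundary: close to uniform
print(cnml_probs(X, y, np.array([4.00, 4.00])))  # deep in class 1: confident
```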
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) strategies are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
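For reference, a minimal uncertainty-based acquisition loop of the kind studied looks like the following; this sketch uses predictive entropy, whereas BALD would additionally need an ensemble or MC-dropout posterior. The data and loop size are hypothetical.

```python
# A minimal uncertainty-based acquisition loop (predictive entropy);
# BALD would additionally require an ensemble or MC-dropout posterior.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X_pool = rng.normal(size=(1_000, 4))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)
labeled = list(rng.choice(len(X_pool), size=20, replace=False))

for _ in range(5):                                   # five acquisition rounds
    clf = RandomForestClassifier(random_state=0).fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    entropy[labeled] = -np.inf                       # never re-query labeled points
    labeled.append(int(entropy.argmax()))            # label the most uncertain point
```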
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Individually Fair Gradient Boosting [86.1984206610373]
We consider the task of enforcing individual fairness in gradient boosting.
We show that our algorithm converges globally and generalizes.
We also demonstrate the efficacy of our algorithm on three ML problems susceptible to algorithmic bias.
arXiv Detail & Related papers (2021-03-31T03:06:57Z)
- Coping with Mistreatment in Fair Algorithms [1.2183405753834557]
We study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric.
We propose a conceptually simple method to mitigate this bias.
We rigorously analyze the proposed method and evaluate it on several real-world datasets, demonstrating its efficacy.
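The Equal Opportunity metric at the center of this analysis is simple to compute: it is the gap in true-positive rates between protected groups. A minimal sketch on synthetic data:

```python
# A minimal sketch of the Equal Opportunity metric: the gap in
# true-positive rates between protected groups. Data are synthetic.
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """TPR(group 0) - TPR(group 1), over qualified (y_true == 1) individuals."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return tprs[0] - tprs[1]

rng = np.random.default_rng(6)
y_true = rng.integers(0, 2, size=500)
group = rng.integers(0, 2, size=500)
# Simulated classifier that misses group-1 positives ~20% more often.
y_pred = (y_true & (rng.random(500) > 0.2 * group)).astype(int)
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```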
arXiv Detail & Related papers (2021-02-22T03:26:06Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- Algorithmic Fairness [11.650381752104298]
It is crucial to develop AI algorithms that are not only accurate but also objective and fair.
Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness.
arXiv Detail & Related papers (2020-01-21T19:01:38Z)