Transparency Tools for Fairness in AI (Luskin)
- URL: http://arxiv.org/abs/2007.04484v1
- Date: Thu, 9 Jul 2020 00:21:54 GMT
- Title: Transparency Tools for Fairness in AI (Luskin)
- Authors: Mingliang Chen, Aria Shahverdi, Sarah Anderson, Se Yong Park, Justin
Zhang, Dana Dachman-Soled, Kristin Lauter, Min Wu
- Abstract summary: We propose new tools for assessing and correcting fairness and bias in AI algorithms.
The three tools are: a new definition of fairness called "controlled fairness" with respect to choices of protected features and filters; algorithms for retraining a given classifier to achieve controlled fairness; and algorithms for adjusting model parameters to achieve "classification parity".
The tools are useful for understanding various dimensions of bias, and in practice the algorithms are effective in starkly reducing a given observed bias when tested on new data.
- Score: 12.158766675246337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose new tools for policy-makers to use when assessing and correcting
fairness and bias in AI algorithms. The three tools are:
- A new definition of fairness called "controlled fairness" with respect to
choices of protected features and filters. The definition provides a simple
test of fairness of an algorithm with respect to a dataset. This notion of
fairness is suitable in cases where fairness is prioritized over accuracy, such
as in cases where there is no "ground truth" data, only data labeled with past
decisions (which may have been biased).
- Algorithms for retraining a given classifier to achieve "controlled
fairness" with respect to a choice of features and filters. Two algorithms are
presented, implemented and tested. These algorithms require training two
different models in two stages. We experiment with combinations of various
types of models for the first and second stage and report on which combinations
perform best in terms of fairness and accuracy.
- Algorithms for adjusting model parameters to achieve a notion of fairness
called "classification parity". This notion of fairness is suitable in cases
where accuracy is prioritized. Two algorithms are presented, one which assumes
that protected features are accessible to the model during testing, and one
which assumes protected features are not accessible during testing.
We evaluate our tools on three different publicly available datasets. We find
that the tools are useful for understanding various dimensions of bias, and
that in practice the algorithms are effective in starkly reducing a given
observed bias when tested on new data.
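
The abstract does not spell out the "controlled fairness" test itself; one minimal reading is that, within each subset of the data selected by a filter, the rate of positive decisions should be roughly the same across protected groups. The Python sketch below assumes exactly that interpretation; the function name controlled_fairness_gap, the column names, and the example filters are all hypothetical.

```python
import pandas as pd

def controlled_fairness_gap(df, prediction_col, protected_col, filters):
    """For each named filter (a boolean mask over the rows of df), compare
    positive-decision rates across protected groups and report the largest
    within-filter gap (0.0 means equal rates under that filter)."""
    gaps = {}
    for name, mask in filters.items():
        subset = df[mask]
        rates = subset.groupby(protected_col)[prediction_col].mean()
        gaps[name] = float(rates.max() - rates.min())
    return gaps

# Hypothetical usage: loan decisions filtered by income band.
# filters = {"low_income": df["income"] < 40_000,
#            "high_income": df["income"] >= 40_000}
# controlled_fairness_gap(df, "approved", "gender", filters)
```

A gap near zero under every filter is what a "fairness over accuracy" audit of past decisions would look for; how the paper formalizes the acceptable threshold is not stated in the abstract.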
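
The two-stage retraining algorithms are only described at a high level (two different models trained in two stages, with various model-type combinations compared). The sketch below is an illustrative pipeline in that spirit, not the paper's algorithm: first-stage scores are used to relabel the training set so that every protected group receives positive labels at the same rate, and the second-stage model is fit to the adjusted labels. All names and the default model choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def two_stage_retrain(X, y, protected, first_stage=None, second_stage=None):
    """Illustrative two-stage retraining (not the paper's exact method).
    X: feature matrix, y: 0/1 labels, protected: group labels (numpy arrays)."""
    first_stage = first_stage or LogisticRegression(max_iter=1000)
    second_stage = second_stage or RandomForestClassifier(n_estimators=200)

    # Stage 1: fit to the original, possibly biased, labels and score everyone.
    first_stage.fit(X, y)
    scores = first_stage.predict_proba(X)[:, 1]

    # Relabel so each protected group gets the same positive rate overall,
    # keeping each group's highest-scoring examples as the positives.
    target_rate = y.mean()
    y_adjusted = np.zeros_like(y)
    for g in np.unique(protected):
        idx = np.where(protected == g)[0]
        k = int(round(target_rate * len(idx)))
        top = idx[np.argsort(scores[idx])[::-1][:k]]
        y_adjusted[top] = 1

    # Stage 2: fit a (possibly different) model type to the adjusted labels.
    second_stage.fit(X, y_adjusted)
    return first_stage, second_stage
```

The paper's experiments vary which model types fill the two stages; in this sketch that simply means passing different estimators as first_stage and second_stage.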
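
For "classification parity" in the setting where protected features are available at test time, a standard post-hoc technique (again a sketch under that assumption, not necessarily the paper's adjustment) is to choose a separate decision threshold per protected group so that every group is classified positive at the same rate. The helper names below are hypothetical.

```python
import numpy as np

def group_thresholds(scores, protected, target_rate):
    """Pick a per-group score threshold so each protected group is classified
    positive at (approximately) the same target rate. Assumes the protected
    attribute is available when the thresholds are applied."""
    return {g: np.quantile(scores[protected == g], 1.0 - target_rate)
            for g in np.unique(protected)}

def predict_with_parity(scores, protected, thresholds):
    """Apply the group-specific thresholds to raw model scores."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, protected)])
```

The companion algorithm in the paper handles the case where protected features are not accessible at test time; that variant cannot key thresholds on the group and is not attempted here.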
Related papers
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
Fair algorithms that retain predictive performance while generalizing better are needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Mitigating Bias in Set Selection with Noisy Protected Attributes [16.882719401742175]
We show that in the presence of noisy protected attributes, in attempting to increase fairness without considering noise, one can, in fact, decrease the fairness of the result!
We formulate a "denoised" selection problem which functions for a large class of fairness metrics.
Our empirical results show that this approach can produce subsets which significantly improve the fairness metrics despite the presence of noisy protected attributes.
arXiv Detail & Related papers (2020-11-09T06:45:15Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind: it considers fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Genetic programming approaches to learning fair classifiers [4.901632310846025]
We discuss current approaches to fairness and motivate proposals that incorporate fairness into genetic programming for classification.
The first is to incorporate a fairness objective into multi-objective optimization.
The second is to adapt lexicase selection to define cases dynamically over intersections of protected groups.
arXiv Detail & Related papers (2020-04-28T04:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.