LUCID: Exposing Algorithmic Bias through Inverse Design
- URL: http://arxiv.org/abs/2208.12786v1
- Date: Fri, 26 Aug 2022 17:06:35 GMT
- Title: LUCID: Exposing Algorithmic Bias through Inverse Design
- Authors: Carmen Mazijn, Carina Prunkl, Andres Algaba, Jan Danckaert, Vincent Ginis
- Abstract summary: We argue that output metrics encounter intrinsic obstacles and present a complementary approach that aligns with the increasing focus on equality of treatment.
We generate a canonical set that shows the desired inputs for a model given a preferred output.
We evaluate LUCID on the UCI Adult and COMPAS data sets and find that some biases detected by a canonical set differ from those of output metrics.
- Score: 1.5257247496416746
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI systems can create, propagate, support, and automate bias in
decision-making processes. To mitigate biased decisions, we both need to
understand the origin of the bias and define what it means for an algorithm to
make fair decisions. Most group fairness notions assess a model's equality of
outcome by computing statistical metrics on the outputs. We argue that these
output metrics encounter intrinsic obstacles and present a complementary
approach that aligns with the increasing focus on equality of treatment. By
Locating Unfairness through Canonical Inverse Design (LUCID), we generate a
canonical set that shows the desired inputs for a model given a preferred
output. The canonical set reveals the model's internal logic and exposes
potential unethical biases by repeatedly interrogating the decision-making
process. We evaluate LUCID on the UCI Adult and COMPAS data sets and find that
some biases detected by a canonical set differ from those of output metrics.
The results show that by shifting the focus towards equality of treatment and
looking into the algorithm's internal workings, the canonical sets are a
valuable addition to the toolbox of algorithmic fairness evaluation.
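To make the gradient-based inverse design idea concrete, here is a minimal sketch (not the authors' implementation): random candidate inputs are pushed by gradient ascent toward the model's preferred output, and the converged inputs form a canonical set whose feature distributions can then be inspected for unwanted group dependence. The logistic stand-in model, its weights, and all hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "trained model": a logistic model with fixed, hypothetical weights.
# In LUCID the model under audit would be used instead.
W = rng.normal(size=6)
b = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(x):
    return sigmoid(x @ W + b)

def canonical_set(n_samples=200, steps=300, lr=0.05):
    """Gradient-ascend random candidate inputs toward the preferred output (score -> 1)."""
    X = rng.normal(size=(n_samples, len(W)))
    for _ in range(steps):
        p = predict(X)                      # model score for each candidate input
        grad = (p * (1 - p))[:, None] * W   # d score / d x for the logistic stand-in
        X += lr * grad                      # push inputs toward the preferred output
    return X

canon = canonical_set()
# Inspecting the canonical set: if a feature that proxies a protected attribute is
# pushed systematically in one direction, that is evidence of bias in the model.
print(canon.mean(axis=0))
```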
Related papers
- Differentially Private Post-Processing for Fair Regression [13.855474876965557]
Our algorithm can be applied to post-process any given regressor to improve fairness by remapping its outputs.
We analyze the sample complexity of our algorithm and provide a fairness guarantee, revealing a trade-off between the statistical bias and variance induced by the choice of the number of bins in the histogram.
arXiv Detail & Related papers (2024-05-07T06:09:37Z)
- Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost" -- a linear combination of the (predicted) group memberships; a schematic numerical sketch of this re-calibration appears after this list.
arXiv Detail & Related papers (2024-05-07T05:58:44Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- LUCID-GAN: Conditional Generative Models to Locate Unfairness [1.5257247496416746]
We present LUCID-GAN, which generates canonical inputs via a conditional generative model instead of gradient-based inverse design.
We empirically evaluate LUCID-GAN on the UCI Adult and COMPAS data sets and show that it can detect unethical biases in black-box models without requiring access to the training data; a schematic sketch of conditional canonical-input generation appears after this list.
arXiv Detail & Related papers (2023-07-28T10:37:49Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Algorithmic Fairness Verification with Graphical Models [24.8005399877574]
We propose an efficient fairness verifier, called FVGM, that encodes correlations among features as a Bayesian network.
We show that FVGM leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms.
arXiv Detail & Related papers (2021-09-20T12:05:14Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind.
It considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
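The "fairness cost" re-calibration described in the Optimal Group Fair Classifiers from Linear Post-Processing entry above can be sketched numerically as follows. The scores, group-membership probabilities, and coefficients are made-up illustrations, not values or fitting procedures from that paper.

```python
import numpy as np

# Hypothetical base-model scores and predicted group memberships for 5 individuals.
base_scores = np.array([0.72, 0.41, 0.58, 0.90, 0.33])
group_probs = np.array([[0.9, 0.1],   # P(group A), P(group B) per individual
                        [0.2, 0.8],
                        [0.6, 0.4],
                        [0.1, 0.9],
                        [0.7, 0.3]])

# Hypothetical fairness-cost coefficients, one per group; in the paper these would be
# chosen to satisfy the selected group-fairness criterion.
lambdas = np.array([0.05, -0.05])

# Re-calibrated score: base score minus a linear combination of group memberships.
fair_scores = base_scores - group_probs @ lambdas
predictions = (fair_scores >= 0.5).astype(int)
print(fair_scores, predictions)
```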
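The LUCID-GAN entry above replaces gradient-based inverse design with a conditional generative model that produces canonical inputs for a preferred output. A minimal sketch of that idea follows; the architecture, layer sizes, and names are assumptions, the generator is untrained, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

# Illustrative dimensions: 10 input features, 2 output labels, 16-dim noise vector.
N_FEATURES, N_LABELS, LATENT = 10, 2, 16

class ConditionalGenerator(nn.Module):
    """Maps noise plus a desired ("preferred") output label to a synthetic input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_LABELS, 64),
            nn.ReLU(),
            nn.Linear(64, N_FEATURES),
        )

    def forward(self, z, label_onehot):
        return self.net(torch.cat([z, label_onehot], dim=1))

generator = ConditionalGenerator()  # in practice, trained adversarially against a critic

# Sample a canonical set of 100 inputs conditioned on the preferred outcome (label 1).
z = torch.randn(100, LATENT)
labels = torch.nn.functional.one_hot(torch.ones(100, dtype=torch.long), N_LABELS).float()
canonical_set = generator(z, labels)
print(canonical_set.shape)  # inspect feature distributions for unwanted group dependence
```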