Counterfactual Fair Opportunity: Measuring Decision Model Fairness with
Counterfactual Reasoning
- URL: http://arxiv.org/abs/2302.08158v1
- Date: Thu, 16 Feb 2023 09:13:53 GMT
- Title: Counterfactual Fair Opportunity: Measuring Decision Model Fairness with
Counterfactual Reasoning
- Authors: Giandomenico Cornacchia, Vito Walter Anelli, Fedelucio Narducci,
Azzurra Ragone, Eugenio Di Sciascio
- Abstract summary: This work aims to unveil unfair model behaviors using counterfactual reasoning in the fairness under unawareness setting.
A counterfactual version of equal opportunity, named counterfactual fair opportunity, is defined, and two novel metrics that analyze the sensitive information of counterfactual samples are introduced.
- Score: 5.626570248105078
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing application of Artificial Intelligence and Machine Learning
models poses potential risks of unfair behavior and, in light of recent
regulations, has attracted the attention of the research community. Several
researchers focused on seeking new fairness definitions or developing
approaches to identify biased predictions. However, none of them exploits the
counterfactual space to this end. In that direction, the methodology proposed
in this work aims to unveil unfair model behaviors using counterfactual
reasoning in the fairness under unawareness setting. A counterfactual
version of equal opportunity named counterfactual fair opportunity is defined
and two novel metrics that analyze the sensitive information of counterfactual
samples are introduced. Experimental results on three different datasets show
the efficacy of our methodologies and our metrics, disclosing the unfair
behavior of classic machine learning and debiasing models.
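As background for the abstract above: equal opportunity, the criterion that counterfactual fair opportunity extends to the counterfactual space, requires equal true positive rates across the groups defined by a sensitive attribute A, i.e. P(Ŷ=1 | A=0, Y=1) = P(Ŷ=1 | A=1, Y=1). The sketch below is illustrative only and does not reproduce the paper's metrics: the synthetic data, the auxiliary probe classifier, and the crude one-step counterfactual generator are all assumptions. It shows the fairness under unawareness setting the work targets, in which the decision model never sees A but proxy features can still leak it.

```python
# Illustrative sketch only (assumed data and models, not the paper's metrics):
# train a decision model under "fairness under unawareness" (sensitive attribute
# A dropped from the features), measure its equal-opportunity gap, and probe how
# much sensitive information the remaining features -- and a crude counterfactual
# of a rejected instance -- still carry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: the non-sensitive features X correlate with A (proxy leakage).
n = 5000
A = rng.integers(0, 2, size=n)                   # sensitive attribute
X = rng.normal(size=(n, 3)) + 0.8 * A[:, None]   # proxies of A
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te, A_tr, A_te = train_test_split(X, y, A, random_state=0)

# Fairness under unawareness: the decision model never sees A.
clf = LogisticRegression().fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

def tpr(y_true, y_pred, group_mask):
    """True positive rate restricted to one sensitive group."""
    pos = group_mask & (y_true == 1)
    return y_pred[pos].mean()

# Equal-opportunity gap: difference in true positive rates across groups.
gap = abs(tpr(y_te, y_hat, A_te == 0) - tpr(y_te, y_hat, A_te == 1))
print(f"equal-opportunity gap: {gap:.3f}")

# Probe: how well can A be recovered from the features the model actually uses?
probe = LogisticRegression().fit(X_tr, A_tr)
print(f"sensitive-attribute probe accuracy: {probe.score(X_te, A_te):.3f}")

# Crude counterfactual: push one rejected instance just across the decision
# boundary along the model's weight direction (a stand-in for a proper
# counterfactual generator). If the probe's inferred sensitive attribute changes
# between the factual and the counterfactual, the counterfactual has traversed
# sensitive information -- the kind of behavior the proposed metrics aim to expose.
w, b = clf.coef_[0], clf.intercept_[0]
x0 = X_te[y_hat == 0][0]
x_cf = x0 - 1.01 * ((x0 @ w + b) / (w @ w)) * w
print("prediction:", clf.predict([x0])[0], "->", clf.predict([x_cf])[0])
print("probe on factual vs counterfactual:", probe.predict([x0])[0], probe.predict([x_cf])[0])
```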
Related papers
- When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers via Membership Inference Attacks [17.243744418309593]
We propose an efficient MIA method against fairness-enhanced models based on fairness discrepancy results.
We also explore potential strategies for mitigating privacy leakages.
arXiv Detail & Related papers (2023-11-07T10:28:17Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework, CLAIRE (the standard counterfactual-fairness criterion is recalled after this list).
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance on two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z) - Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well performing models.
Our findings suggest that such unfairness can be readily found in real life and it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z) - Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce
Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z) - Improving Fair Predictions Using Variational Inference In Causal Models [8.557308138001712]
The importance of algorithmic fairness grows with the increasing impact machine learning has on people's lives.
Recent work on fairness metrics shows the need for causal reasoning in fairness constraints.
This research aims to contribute to machine learning techniques which honour our ethical and legal boundaries.
arXiv Detail & Related papers (2020-08-25T08:27:11Z) - Fair inference on error-prone outcomes [0.0]
We show that, when an error-prone proxy target is used, existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest.
We suggest a framework resulting from the combination of fair ML methods and measurement models found in the statistical literature.
In a healthcare decision problem, we find that using a latent variable model to account for measurement error removes the unfairness detected previously.
arXiv Detail & Related papers (2020-03-17T10:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.