Fairness Explainability using Optimal Transport with Applications in
Image Classification
- URL: http://arxiv.org/abs/2308.11090v2
- Date: Tue, 31 Oct 2023 15:07:07 GMT
- Title: Fairness Explainability using Optimal Transport with Applications in
Image Classification
- Authors: Philipp Ratz and François Hu and Arthur Charpentier
- Abstract summary: We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence \emph{on} the bias.
- Score: 0.46040036610482665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring trust and accountability in Artificial Intelligence systems demands
explainability of their outcomes. Despite significant progress in Explainable AI,
human biases still taint a substantial portion of its training data, raising
concerns about unfairness or discriminatory tendencies. Current approaches in
the field of Algorithmic Fairness focus on mitigating such biases in the
outcomes of a model, but few attempts have been made to try to explain
\emph{why} a model is biased. To bridge this gap between the two fields, we
propose a comprehensive approach that uses optimal transport theory to uncover
the causes of discrimination in Machine Learning applications, with a
particular emphasis on image classification. We leverage Wasserstein
barycenters to achieve fair predictions and introduce an extension to pinpoint
bias-associated regions. This allows us to derive a cohesive system which uses
the enforced fairness to measure each feature's influence \emph{on} the bias.
Taking advantage of this interplay of enforcing and explaining fairness, our
method holds significant implications for the development of trustworthy and
unbiased AI systems, fostering transparency, accountability, and fairness in
critical decision-making scenarios across diverse domains.
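As a concrete illustration of the barycenter idea (a minimal sketch, not the paper's implementation): for one-dimensional score distributions, the Wasserstein-2 barycenter's quantile function is the weighted average of the per-group quantile functions, so fair predictions can be obtained by pushing each group's scores through that common quantile function. The function name and the 101-point quantile grid below are illustrative choices.

```python
import numpy as np

def fair_scores_via_barycenter(scores, groups):
    """Post-process 1-D model scores so that every group shares a common
    score distribution: the Wasserstein-2 barycenter of the per-group
    distributions (for 1-D measures, the weighted average of quantile
    functions)."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()

    qs = np.linspace(0.0, 1.0, 101)
    # Per-group quantile functions, then their weighted average = barycenter.
    group_quantiles = {g: np.quantile(scores[groups == g], qs) for g in labels}
    bary_quantiles = sum(w * group_quantiles[g] for g, w in zip(labels, weights))

    repaired = np.empty_like(scores)
    for g in labels:
        mask = groups == g
        grp = scores[mask]
        # Empirical CDF value (rank) of each score within its own group ...
        ranks = np.searchsorted(np.sort(grp), grp, side="right") / grp.size
        # ... pushed forward through the barycenter's quantile function.
        repaired[mask] = np.interp(ranks, qs, bary_quantiles)
    return repaired
```

After this repair, each group's repaired scores follow (approximately) the same distribution, which is the starting point the paper's explainability extension builds on.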
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Fair Clustering: A Causal Perspective [5.885238773559016]
We show that optimising for non-causal fairness notions can paradoxically induce direct discriminatory effects from a causal standpoint.
We present a clustering approach that incorporates causal fairness metrics to provide a more nuanced approach to fairness in unsupervised learning.
arXiv Detail & Related papers (2023-12-14T15:58:03Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
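A toy sketch of such a linear program (under strong simplifying assumptions; the merit matrix, the equal-allocation constraint, and all dimensions below are hypothetical, and the paper's actual axioms are richer): maximize expected merit over probabilistic allocations, requiring applicants with identical merit profiles to receive equal total allocation mass.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 3 applicants, 2 positions; merit[i, j] = estimated merit
# of assigning applicant i to position j. Decision variables x[i, j] are
# assignment probabilities.
merit = np.array([[3.0, 1.0],
                  [3.0, 1.0],   # same merit profile as applicant 0
                  [2.0, 2.0]])
n, m = merit.shape
c = -merit.ravel()  # linprog minimizes, so negate merit to maximize it

# Each position is filled with total probability 1.
A_eq = np.zeros((m, n * m))
for j in range(m):
    A_eq[j, j::m] = 1.0
b_eq = np.ones(m)

# Individual fairness (toy version): applicants 0 and 1 have identical
# merit profiles, so they must receive equal total allocation mass.
fair = np.zeros(n * m)
fair[0:m] = 1.0
fair[m:2 * m] = -1.0
A_eq = np.vstack([A_eq, fair])
b_eq = np.append(b_eq, 0.0)

# Each applicant takes at most one position in expectation.
A_ub = np.zeros((n, n * m))
for i in range(n):
    A_ub[i, i * m:(i + 1) * m] = 1.0
b_ub = np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
x = res.x.reshape(n, m)  # fair utility-maximizing allocation probabilities
```

Here the optimum splits position 0 evenly between the two equally meritorious applicants rather than arbitrarily favoring one, while still attaining maximal expected merit.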
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations [12.248793742165278]
We bridge the gap between fairness and explainability by presenting a novel perspective of procedure-oriented fairness based on explanations.
We propose a Comprehensive Fairness Algorithm (CFA), which simultaneously fulfills multiple objectives - improving traditional fairness, satisfying explanation fairness, and maintaining the utility performance.
arXiv Detail & Related papers (2022-12-07T18:35:54Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Machine Learning [15.934879442202785]
We show how causal Bayesian networks can play an important role to reason about and deal with fairness.
We present a unified framework that encompasses methods that can deal with different settings and fairness criteria.
arXiv Detail & Related papers (2020-12-31T18:38:58Z)
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about fairness of individual instances.
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between the group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.