Towards the Right Kind of Fairness in AI
- URL: http://arxiv.org/abs/2102.08453v1
- Date: Tue, 16 Feb 2021 21:12:30 GMT
- Title: Towards the Right Kind of Fairness in AI
- Authors: Boris Ruf and Marcin Detyniecki
- Abstract summary: "Fairness Compass" is a tool which makes identifying the most appropriate fairness metric for a given system a simple, straightforward procedure.
We argue that documenting the reasoning behind the respective decisions in the course of this process can help to build trust from the user.
- Score: 3.723553383515688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To implement fair machine learning in a sustainable way, identifying the
right fairness definition is key. However, fairness is a concept of justice,
and various definitions exist. Some of them are in conflict with each other and
there is no uniformly accepted notion of fairness. The most appropriate
fairness definition for an artificial intelligence system is often a matter of
application and the right choice depends on ethical standards and legal
requirements. In the absence of officially binding rules, the objective of this
document is to structure the complex landscape of existing fairness
definitions. We propose the "Fairness Compass", a tool which formalises the
selection process and makes identifying the most appropriate fairness metric
for a given system a simple, straightforward procedure. We further argue that
documenting the reasoning behind the respective decisions in the course of this
process can help to build trust from the user through explaining and justifying
the implemented fairness.
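The compass described in the abstract is essentially a decision procedure: a sequence of questions whose answers narrow down the set of applicable fairness metrics. A minimal sketch of that idea, with illustrative questions and metric names that are NOT the paper's actual decision tree:

```python
# Hypothetical sketch of a compass-style selection procedure: each answer
# narrows the space of candidate fairness metrics. The questions and the
# metric names below are illustrative assumptions, not the paper's tree.

def select_fairness_metric(has_ground_truth: bool,
                           policy_enforces_equal_rates: bool) -> str:
    """Walk a tiny decision tree and return a candidate fairness metric."""
    if policy_enforces_equal_rates:
        # Equal positive-prediction rates across groups, regardless of labels.
        return "demographic parity"
    if has_ground_truth:
        # Reliable labels allow error-rate-based notions.
        return "equalized odds"
    return "individual fairness"

choice = select_fairness_metric(has_ground_truth=True,
                                policy_enforces_equal_rates=False)
print(choice)  # equalized odds
```

Recording each answer alongside the chosen metric is exactly the documentation trail the authors argue builds user trust.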
Related papers
- What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice [1.8434042562191815]
We argue that in the context of imperfect decision-making systems, we should not only care about what the ideal distribution of benefits/harms among individuals would look like, but also about how real systems fall short of that ideal.
This requires us to rethink the way in which we, as algorithmic fairness researchers, view distributive justice and use fairness criteria.
arXiv Detail & Related papers (2024-07-17T11:13:23Z) - AI Fairness in Practice [0.46671368497079174]
There is a broad spectrum of views across society on what the concept of fairness means and how it should be put to practice.
This workbook explores how a context-based approach to understanding AI Fairness can help project teams better identify, mitigate, and manage the many ways that unfair bias and discrimination can crop up across the AI project workflow.
arXiv Detail & Related papers (2024-02-19T23:02:56Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
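Statistical parity, the group fairness notion this metric is linked to, requires equal positive-prediction rates across groups. A minimal sketch with made-up data:

```python
# Minimal sketch of statistical parity (demographic parity): the rate of
# positive predictions should be equal across sensitive groups. The data
# below is fabricated for illustration.

def positive_rate(preds, groups, group):
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def statistical_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {g: positive_rate(preds, groups, g) for g in set(groups)}
    values = list(rates.values())
    return abs(values[0] - values[1])

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_gap(preds, groups))  # 0.5  (3/4 vs 1/4)
```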
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z) - Explaining how your AI system is fair [3.723553383515688]
We propose to use a decision tree as a means to explain and justify the implemented kind of fairness to the end users.
We argue that specifying "fairness" for a given use case is the best way forward to maintain confidence in AI systems.
arXiv Detail & Related papers (2021-05-03T07:52:56Z) - Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
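The core individual-fairness requirement is that similar individuals receive similar predictions, judged under a task-specific fair metric. The paper learns that metric from data; the sketch below assumes it is already given and stands in a plain weighted Euclidean distance for it:

```python
# Sketch of the individual-fairness check: "similar individuals should
# receive similar predictions". The fair metric d is assumed given (the
# paper proposes two ways to learn it); a weighted Euclidean distance is
# used here purely as a stand-in, and the toy model is an assumption.

import math

def fair_distance(x, y, weights):
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)))

def violates_individual_fairness(model, x, y, weights, lipschitz=1.0):
    """True if the prediction gap exceeds what the fair metric allows."""
    return abs(model(x) - model(y)) > lipschitz * fair_distance(x, y, weights)

# Two near-identical individuals under a toy linear model.
model = lambda x: 0.8 * x[0] + 0.1 * x[1]
x, y = (0.5, 0.4), (0.5, 0.5)
print(violates_individual_fairness(model, x, y, weights=(1.0, 1.0)))  # False
```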
arXiv Detail & Related papers (2020-06-19T23:47:15Z) - Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
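Conditional fairness requires parity only within each stratum of the fairness variables (for example, qualification level), rather than over the whole population. A minimal sketch, with illustrative variable names and fabricated data:

```python
# Sketch of conditional fairness: demographic parity is required only
# within each stratum of the "fairness variables", not globally. The
# field names and records below are illustrative assumptions.

from collections import defaultdict

def conditional_parity_gaps(records):
    """records: (prediction, sensitive_group, fairness_variable) triples.
    Returns, per stratum, the gap in positive rates between groups."""
    strata = defaultdict(lambda: defaultdict(list))
    for pred, group, fair_var in records:
        strata[fair_var][group].append(pred)
    gaps = {}
    for fair_var, by_group in strata.items():
        rates = [sum(v) / len(v) for v in by_group.values()]
        gaps[fair_var] = max(rates) - min(rates)
    return gaps

records = [
    (1, "a", "qualified"), (1, "b", "qualified"),
    (1, "a", "qualified"), (0, "b", "qualified"),
    (0, "a", "unqualified"), (0, "b", "unqualified"),
]
print(conditional_parity_gaps(records))
```

A gap near zero in every stratum means the decisions satisfy conditional fairness even if unconditional rates differ between groups.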
arXiv Detail & Related papers (2020-06-18T12:56:28Z) - Statistical Equity: A Fairness Classification Objective [6.174903055136084]
We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness, and motivate it with its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
arXiv Detail & Related papers (2020-05-14T23:19:38Z) - Abstracting Fairness: Oracles, Metrics, and Interpretability [21.59432019966861]
We examine what can be learned from a fairness oracle equipped with an underlying understanding of "true" fairness.
Our results have implications for interpretability, a highly desired but poorly defined property of classification systems.
arXiv Detail & Related papers (2020-04-04T03:14:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.