Fairness in KI-Systemen
- URL: http://arxiv.org/abs/2307.08486v1
- Date: Mon, 17 Jul 2023 13:48:27 GMT
- Title: Fairness in KI-Systemen
- Authors: Janine Strotherm, Alissa Müller, Barbara Hammer, and Benjamin Paaßen
- Abstract summary: The more AI-assisted decisions affect people's lives, the more important the fairness of such decisions becomes.
In this chapter, we provide an introduction to research on fairness in machine learning.
- Score: 7.272024968089535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The more AI-assisted decisions affect people's lives, the more important the
fairness of such decisions becomes. In this chapter, we provide an introduction
to research on fairness in machine learning. We explain the main fairness
definitions and strategies for achieving fairness using concrete examples and
place fairness research in the European context. Our contribution is aimed at
an interdisciplinary audience and therefore avoids mathematical formulation but
emphasizes visualizations and examples.
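As a concrete illustration of the kind of fairness definition such an introduction covers, the sketch below computes the demographic parity difference (the gap in positive-decision rates between two groups) on synthetic toy data. This example is not taken from the chapter; the data and function name are hypothetical.

```python
# Illustrative sketch (not from the chapter): demographic parity difference,
# a common group-fairness measure. All data below is synthetic.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy example: 1 = loan approved, 0 = denied; groups "A" and "B".
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # |0.75 - 0.25| = 0.5
```

A value of 0 would mean both groups receive positive decisions at the same rate; the chapter's visual examples convey the same idea without formulas.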
Related papers
- A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z)
- Subjective fairness in algorithmic decision-support [0.0]
The treatment of fairness in decision-making literature usually involves quantifying fairness using objective measures.
This work takes a critical stance to highlight the limitations of these approaches using sociological insights.
We redefine fairness as a subjective property moving from a top-down to a bottom-up approach.
arXiv Detail & Related papers (2024-06-28T14:37:39Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- What-is and How-to for Fairness in Machine Learning: A Survey, Reflection, and Perspective [13.124434298120494]
We review and reflect on various fairness notions previously proposed in machine learning literature.
We also consider the long-term impact that is induced by current prediction and decision.
This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) and the means (which spectrum of fairness analysis is of interest) to fulfill the intended purpose.
arXiv Detail & Related papers (2022-06-08T18:05:46Z)
- The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed that consider different notions of what constitutes a "fair decision" in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
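One way to see why this "zoo" needs ordering is that different definitions can disagree on the same predictions. The hedged sketch below (synthetic data, hypothetical function names, not from the surveyed paper) evaluates two common metrics on one toy classifier: demographic parity is satisfied while equal opportunity is violated.

```python
# Hedged sketch: two fairness definitions evaluated on the same toy
# predictions, showing that satisfying one need not satisfy the other.

def positive_rate(preds):
    """Fraction of positive predictions (basis of demographic parity)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """TPR among truly positive individuals (basis of equal opportunity)."""
    pos = [p for p, y in zip(preds, labels) if y == 1]
    return sum(pos) / len(pos)

# Synthetic predictions and true labels for groups A and B.
preds_a, labels_a = [1, 1, 0, 0], [1, 0, 0, 0]
preds_b, labels_b = [1, 1, 0, 0], [1, 1, 1, 0]

# Demographic parity holds: both groups have positive rate 0.5.
print(positive_rate(preds_a), positive_rate(preds_b))
# Equal opportunity is violated: TPR is 1.0 for A but only 2/3 for B.
print(true_positive_rate(preds_a, labels_a),
      true_positive_rate(preds_b, labels_b))
```

Which metric is the right one depends on context, which is exactly the kind of question these surveys address.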
arXiv Detail & Related papers (2021-06-01T13:19:30Z)
- Who Gets What, According to Whom? An Analysis of Fairness Perceptions in Service Allocation [2.69180747382622]
We experimentally explore five novel research questions at the intersection of the "Who," "What," and "How" of fairness perceptions.
Our results suggest that the "Who" and "What," at least, matter in ways that 1) are not easily explained by any one theoretical perspective and 2) have critical implications for how perceptions of fairness should be measured and/or integrated into algorithmic decision-making systems.
arXiv Detail & Related papers (2021-05-10T15:31:22Z)
- Machine learning fairness notions: Bridging the gap with real-world applications [4.157415305926584]
Fairness emerged as an important requirement to guarantee that Machine Learning predictive systems do not discriminate against specific individuals or entire sub-populations.
This paper is a survey that illustrates the subtleties between fairness notions through a large number of examples and scenarios.
arXiv Detail & Related papers (2020-06-30T13:01:06Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
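Individual fairness is usually stated as a Lipschitz condition: similar individuals (under a task-specific fair metric) should receive similar scores. The sketch below illustrates that condition on toy data; the metric, model, and all names are hypothetical and are not the learned metrics from the paper above.

```python
# Illustrative sketch of the individual-fairness (Lipschitz) criterion:
# |f(x) - f(y)| <= L * d(x, y) for a fair metric d. Synthetic toy data.
import math

def fair_distance(x, y):
    """Toy fair metric: Euclidean distance over non-protected features
    (the last feature is treated as the protected attribute)."""
    return math.dist(x[:-1], y[:-1])

def lipschitz_violations(model, individuals, L=1.0):
    """Return pairs whose score gap exceeds L times their fair distance."""
    bad = []
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if abs(model(x) - model(y)) > L * fair_distance(x, y):
                bad.append((x, y))
    return bad

# A toy model that (unfairly) weights the protected last feature heavily.
model = lambda x: 0.1 * x[0] + 0.9 * x[-1]
# Two individuals identical except for the protected attribute.
people = [(1.0, 0.0), (1.0, 1.0)]
print(lipschitz_violations(model, people))  # the identical pair is flagged
```

Because the two individuals are identical under the fair metric (distance 0) but receive different scores, the pair is flagged as a violation; learning a good `fair_distance` from data is the hard part the paper addresses.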
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
- Statistical Equity: A Fairness Classification Objective [6.174903055136084]
We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness, and motivate it with its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
arXiv Detail & Related papers (2020-05-14T23:19:38Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.