A Systematic Approach to Group Fairness in Automated Decision Making
- URL: http://arxiv.org/abs/2109.04230v1
- Date: Thu, 9 Sep 2021 12:47:15 GMT
- Title: A Systematic Approach to Group Fairness in Automated Decision Making
- Authors: Corinna Hertweck and Christoph Heitz
- Abstract summary: The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics.
We will do this by considering in which sense socio-demographic groups are compared when making a statement about fairness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the field of algorithmic fairness has brought forth many ways to
measure and improve the fairness of machine learning models, these findings are
still not widely used in practice. We suspect that one reason for this is that
the field of algorithmic fairness has produced many different definitions of
fairness, which are difficult to navigate. The goal of this paper is to provide
data scientists with an accessible introduction to group fairness metrics and
to give some insight into the philosophical reasoning for caring about these
metrics. We will do this by considering in which sense socio-demographic groups
are compared when making a statement about fairness.
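As a concrete illustration of the kind of group fairness metric the paper introduces, here is a minimal sketch that computes the statistical parity (demographic parity) difference between two socio-demographic groups from binary model decisions. The group labels, decision values, and function name are hypothetical examples, not taken from the paper.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-decision rates between two groups.

    y_pred : binary model decisions (0/1)
    group  : binary group membership (0/1), e.g. a socio-demographic attribute
    A value of 0 means both groups receive positive decisions at the same rate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive-decision rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive-decision rate in group 1
    return rate_a - rate_b

# Hypothetical example: decisions for 8 individuals and their group labels.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Group fairness metrics of this kind differ mainly in which quantity is compared across groups (decision rates, error rates, calibration, etc.), which is the design space the paper helps navigate.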
Related papers
- A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z) - Subjective fairness in algorithmic decision-support [0.0]
The treatment of fairness in decision-making literature usually involves quantifying fairness using objective measures.
This work takes a critical stance to highlight the limitations of these approaches using sociological insights.
We redefine fairness as a subjective property moving from a top-down to a bottom-up approach.
arXiv Detail & Related papers (2024-06-28T14:37:39Z) - FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z) - The Flawed Foundations of Fair Machine Learning [0.0]
We show that there is a trade-off between statistically accurate outcomes and group-similar outcomes in any data setting where group disparities exist.
We introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group-similar outcomes.
arXiv Detail & Related papers (2023-06-02T10:07:12Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated Prediction Sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Everything is Relative: Understanding Fairness with Optimal Transport [1.160208922584163]
We present an optimal transport-based approach to fairness that offers an interpretable and quantifiable exploration of bias and its structure.
Our framework is able to recover well-known examples of algorithmic discrimination, detect unfairness when other metrics fail, and explore recourse opportunities (a minimal distance-based sketch follows this list).
arXiv Detail & Related papers (2021-02-20T13:57:53Z) - Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
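The optimal-transport entry above can be made concrete with a small sketch: comparing the distributions of model scores across two groups via the 1-Wasserstein (earth mover's) distance, one quantifiable way to express how differently two groups are treated. The score distributions below are invented for illustration; this is not the cited paper's actual framework.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical model scores for two socio-demographic groups.
rng = np.random.default_rng(0)
scores_group_a = rng.beta(2, 5, size=500)  # group A tends to receive lower scores
scores_group_b = rng.beta(5, 2, size=500)  # group B tends to receive higher scores

# 1-Wasserstein distance between the two score distributions:
# 0 means identical distributions; larger values indicate a bigger disparity.
disparity = wasserstein_distance(scores_group_a, scores_group_b)
print(f"Wasserstein distance between group score distributions: {disparity:.3f}")
```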