Distributive Justice as the Foundational Premise of Fair ML:
Unification, Extension, and Interpretation of Group Fairness Metrics
- URL: http://arxiv.org/abs/2206.02897v3
- Date: Tue, 2 May 2023 07:10:40 GMT
- Title: Distributive Justice as the Foundational Premise of Fair ML:
Unification, Extension, and Interpretation of Group Fairness Metrics
- Authors: Joachim Baumann, Corinna Hertweck, Michele Loi, Christoph Heitz
- Abstract summary: Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems.
We propose a comprehensive framework for group fairness metrics, which links them to a broader range of theories of distributive justice.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Group fairness metrics are an established way of assessing the fairness of
prediction-based decision-making systems. However, these metrics are still
insufficiently linked to philosophical theories, and their moral meaning is
often unclear. In this paper, we propose a comprehensive framework for group
fairness metrics, which links them to a broader range of theories of distributive justice.
Group fairness metrics differ in how they choose to measure the benefit or
harm of a decision for the affected individuals, and in what moral claims to
benefits they assume. Our unifying framework reveals the
normative choices associated with standard group fairness metrics and allows an
interpretation of their moral substance. In addition, this broader view
provides a structure for the expansion of standard fairness metrics that we
find in the literature. This expansion makes it possible to address several criticisms of
standard group fairness metrics, specifically: (1) they are parity-based, i.e.,
they demand some form of equality between groups, which may sometimes be
detrimental to marginalized groups; (2) they only compare decisions across
groups but not the resulting consequences for these groups; and (3) the full
breadth of the distributive justice literature is not sufficiently represented.
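The parity-based pattern that criticism (1) refers to is easy to make concrete. Below is a minimal Python sketch (our own illustration with hypothetical function names, not the authors' implementation): it computes two standard group fairness metrics as gaps in the average benefit between two groups, where the benefit is simply receiving the positive decision.

```python
import numpy as np

def statistical_parity_diff(y_pred, group):
    """Gap in positive-decision rates between two groups.

    Treats receiving the positive decision (y_pred == 1) as the benefit;
    a parity-based metric demands that this gap be close to zero.
    """
    g0, g1 = (group == 0), (group == 1)
    return y_pred[g0].mean() - y_pred[g1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates between two groups.

    Restricts the comparison to individuals with a claim to the
    positive decision (y_true == 1).
    """
    g0 = (group == 0) & (y_true == 1)
    g1 = (group == 1) & (y_true == 1)
    return y_pred[g0].mean() - y_pred[g1].mean()

# Toy example: binary decisions for 8 individuals in two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(statistical_parity_diff(y_pred, group))         # compares decisions
print(equal_opportunity_diff(y_true, y_pred, group))  # compares claim-holders only
```

Both functions compare decisions rather than consequences, which is exactly criticism (2); in the framework's terms, they differ only in the assumed benefit (the positive decision) and in who is assumed to hold a claim to it (everyone vs. the qualified).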
Related papers
- Subjective fairness in algorithmic decision-support
The treatment of fairness in decision-making literature usually involves quantifying fairness using objective measures.
This work takes a critical stance to highlight the limitations of these approaches using sociological insights.
We redefine fairness as a subjective property, moving from a top-down to a bottom-up approach.
arXiv Detail & Related papers (2024-06-28T14:37:39Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation
When using machine learning to aid decision-making, it is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
In cases where systematic differences between groups play a significant role in outcomes, these methods may overlook the influence of non-protected variables.
arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- Fair Without Leveling Down: A New Intersectional Fairness Definition
We propose a new definition, α-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups.
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
arXiv Detail & Related papers (2023-05-21T16:15:12Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
The model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- Joint Multisided Exposure Fairness for Recommendation
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
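As a rough illustration of the exposure notion in the entry above (our own minimal sketch under a standard logarithmic position-bias assumption; names like group_exposure are hypothetical, not the paper's API): group-level exposure can be computed by summing a rank-discounted weight over the items each producer group contributes to the rankings shown to consumers.

```python
import numpy as np

def group_exposure(rankings, item_group, n_groups):
    """Share of rank-discounted exposure received by each producer group.

    rankings: one ranked list of item ids per consumer.
    item_group: maps item id -> producer group.
    Uses a 1/log2(rank+1) position bias; a different browsing model
    would change the weights but not the group comparison.
    """
    exposure = np.zeros(n_groups)
    for ranked_items in rankings:
        for rank, item in enumerate(ranked_items, start=1):
            exposure[item_group[item]] += 1.0 / np.log2(rank + 1)
    return exposure / exposure.sum()

# Toy example: 3 consumers, 4 items; items 0-1 belong to group 0, items 2-3 to group 1.
rankings = [[0, 2, 1, 3], [2, 0, 3, 1], [0, 1, 2, 3]]
item_group = {0: 0, 1: 0, 2: 1, 3: 1}

print(group_exposure(rankings, item_group, n_groups=2))
```

Repeating the same computation separately for each consumer group gives the joint consumer/producer view that the paper formalizes.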
- Measuring Fairness of Text Classifiers via Prediction Sensitivity
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked to a specific notion of group fairness (statistical parity) and to individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
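A minimal sketch of the perturbation idea behind the prediction-sensitivity entry above (our own illustrative version with hypothetical names; the paper's metric is defined differently in detail): estimate how much a model's output moves under small random input perturbations, averaged over a sample.

```python
import numpy as np

def accumulated_sensitivity(predict_proba, X, eps=1e-2, n_draws=20, seed=0):
    """Mean absolute change in predicted probability under small Gaussian
    input perturbations: an illustrative perturbation-based probe, not
    the paper's exact accumulated prediction sensitivity metric."""
    rng = np.random.default_rng(seed)
    base = predict_proba(X)
    total = 0.0
    for _ in range(n_draws):
        noise = rng.normal(scale=eps, size=X.shape)
        total += np.abs(predict_proba(X + noise) - base).mean()
    return total / n_draws

# Toy model: logistic score on two features.
w = np.array([2.0, -1.0])
predict_proba = lambda X: 1.0 / (1.0 + np.exp(-(X @ w)))

X = np.array([[0.5, 1.0], [2.0, -0.5], [-1.0, 0.0]])
print(accumulated_sensitivity(predict_proba, X))
```

Comparing such sensitivity scores between protected groups is what allows a perturbation-based score to be related to statistical parity.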
- Fairness Through Counterfactual Utilities
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification settings.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons
We discuss why fairness metrics need to be examined through the lens of intersectionality.
We suggest a simple worst-case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal, and political framework for handling intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z)
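A minimal sketch of the worst-case comparison idea from the entry above (our own illustration, assuming positive-decision rates as the base statistic; the paper's exact construction may differ): evaluate the base metric on every intersectional subgroup and report the worst min/max ratio.

```python
import numpy as np
from itertools import product

def worst_case_ratio(y_pred, attrs):
    """Worst-case min/max ratio of positive-decision rates across all
    intersectional subgroups defined by the attribute columns.

    attrs: (n, k) array; row i holds individual i's k protected
    attributes. A ratio near 1 means even the worst-off subgroup
    fares like the best-off one.
    """
    rates = []
    for combo in product(*(np.unique(attrs[:, j]) for j in range(attrs.shape[1]))):
        mask = np.all(attrs == np.array(combo), axis=1)
        if mask.any():  # skip empty intersections
            rates.append(y_pred[mask].mean())
    return min(rates) / max(rates) if max(rates) > 0 else 0.0

# Toy example: two binary attributes -> four intersectional subgroups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
attrs  = np.array([[0, 0], [0, 0], [0, 1], [0, 1],
                   [1, 0], [1, 0], [1, 1], [1, 1]])
print(worst_case_ratio(y_pred, attrs))
```

Swapping in another base statistic (e.g., true-positive rates) expands that metric to the intersectional setting in the same worst-case fashion.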