Subjective fairness in algorithmic decision-support
- URL: http://arxiv.org/abs/2407.01617v1
- Date: Fri, 28 Jun 2024 14:37:39 GMT
- Title: Subjective fairness in algorithmic decision-support
- Authors: Sarra Tajouri, Alexis Tsoukiàs
- Abstract summary: The treatment of fairness in the decision-making literature usually involves quantifying fairness using objective measures.
This work takes a critical stance to highlight the limitations of these approaches using sociological insights.
We redefine fairness as a subjective property, moving from a top-down to a bottom-up approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The treatment of fairness in the decision-making literature usually involves quantifying fairness using objective measures. This work takes a critical stance to highlight the limitations of these approaches (group fairness and individual fairness) using sociological insights. First, we expose how these metrics often fail to reflect societal realities: by neglecting crucial historical, cultural, and social factors, they fall short of capturing all discriminatory practices. Second, we redefine fairness as a subjective property, moving from a top-down to a bottom-up approach. This shift allows the inclusion of diverse stakeholders' perceptions, recognizing that fairness is not merely about objective metrics but also about individuals' views on their treatment. Finally, we aim to use explanations as a means to achieve fairness. Our approach employs explainable clustering to form groups based on individuals' subjective perceptions, ensuring that individuals who see themselves as similar receive similar treatment. We emphasize the role of explanations in achieving fairness, focusing not only on procedural fairness but also on providing subjective explanations to convince stakeholders of their fair treatment.
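As a rough illustration of the explainable-clustering idea, the sketch below clusters individuals on hypothetical subjective-perception features and then fits a shallow decision tree as a surrogate, turning each cluster into human-readable rules; the feature names, data, and pipeline are illustrative assumptions, not the paper's implementation.
```python
# Illustrative sketch only: cluster individuals by their subjective
# perception ratings, then fit a shallow decision tree as a surrogate
# to explain cluster membership in human-readable terms.
# Feature names and data are hypothetical, not from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Each row: one individual's self-reported perceptions (e.g., survey
# answers on perceived effort, need, prior disadvantage), scaled to [0, 1].
perceptions = rng.random((200, 3))
feature_names = ["perceived_effort", "perceived_need", "perceived_disadvantage"]

# Step 1: group individuals who describe themselves similarly.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(perceptions)

# Step 2: a shallow tree approximates the clustering with explicit rules,
# which can be shown to stakeholders as an explanation of their grouping.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(perceptions, clusters)
print(export_text(tree, feature_names=feature_names))
```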
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
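A minimal sketch of how a counterfactual-contrastive term of this kind could be set up (hypothetical encoder, data, and loss, not DualFair's actual architecture):
```python
# Hypothetical counterfactual-contrastive loss: an encoder should map an
# input and its counterfactual (sensitive attribute flipped) to nearby
# representations, so the attribute becomes hard to recover downstream.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 16))

x = torch.randn(64, 8)
x[:, 0] = (x[:, 0] > 0).float()  # treat feature 0 as a binary sensitive attribute
x_cf = x.clone()
x_cf[:, 0] = 1 - x_cf[:, 0]      # naive counterfactual: flip the attribute

z, z_cf = encoder(x), encoder(x_cf)
# Counterfactual-fairness term: maximize cosine similarity between pairs.
loss_cf = 1 - F.cosine_similarity(z, z_cf, dim=1).mean()
loss_cf.backward()               # combine with a group-fairness term in practice
```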
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
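To make the notion of a locally balanced neighborhood concrete, here is a hypothetical diagnostic (not the paper's method) that scores how evenly a binary sensitive attribute is distributed among each sample's nearest neighbors in a learned representation space:
```python
# Hypothetical diagnostic: for each sample, inspect its k nearest
# neighbors in the representation space and measure how balanced the
# sensitive attribute is within that neighborhood.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_balance(Z, sensitive, k=10):
    """Return the mean fraction of the minority attribute value among
    each sample's k nearest neighbors (0.5 = perfectly balanced)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    _, idx = nn.kneighbors(Z)
    neighbor_attrs = sensitive[idx[:, 1:]]    # drop each sample itself
    frac = neighbor_attrs.mean(axis=1)        # fraction with attribute value 1
    return np.minimum(frac, 1 - frac).mean()  # closeness to 50/50

# Z: learned representations; sensitive: binary attribute (0/1). Toy data:
Z = np.random.default_rng(1).normal(size=(500, 16))
sensitive = np.random.default_rng(2).integers(0, 2, size=500)
print(neighborhood_balance(Z, sensitive))
```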
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Gerrymandering Individual Fairness [0.0]
Individual fairness is a fairness measure that is supposed to prevent the unfair treatment of individuals at the subgroup level.
The goal of the present paper is to explore the extent to which it is possible to gerrymander individual fairness itself.
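For reference, individual fairness is commonly formalized as a Lipschitz condition: |f(x) - f(y)| <= L * d(x, y) for every pair of individuals. A naive violation checker might look like the sketch below (illustrative only, not the paper's gerrymandering construction):
```python
# Naive checker for the Lipschitz formulation of individual fairness:
# flag every pair (i, j) whose predictions differ more than L times the
# distance between the two individuals allows.
import numpy as np

def fairness_violations(preds, X, L=1.0):
    diffs = np.abs(preds[:, None] - preds[None, :])
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.argwhere(diffs > L * dists + 1e-9)   # offending index pairs

preds = np.array([0.1, 0.9, 0.15])
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
print(fairness_violations(preds, X))  # symmetric pairs (0,1) and (1,2) violate
```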
arXiv Detail & Related papers (2022-04-25T12:44:57Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
The proposed metric, ACCUMULATED PREDICTION SENSITIVITY, measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
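The metric is built from the model's sensitivity to input perturbations; the finite-difference stand-in below conveys the idea under that simplification (the model and helper function are hypothetical):
```python
# Finite-difference stand-in for prediction sensitivity: how much does the
# positive-class probability move when each input feature is nudged?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 4)), rng.integers(0, 2, size=300)
model = LogisticRegression().fit(X, y)

def prediction_sensitivity(predict_proba, X, eps=1e-3):
    base = predict_proba(X)[:, 1]
    sens = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        sens.append(np.abs(predict_proba(Xp)[:, 1] - base).mean() / eps)
    # One value per feature; large sensitivity on a sensitive feature
    # suggests potential unfairness.
    return np.array(sens)

print(prediction_sensitivity(model.predict_proba, X))
```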
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
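A standard quantification technique in this spirit is adjusted classify-and-count, which corrects a proxy classifier's raw prevalence estimate using its error rates measured on a labeled holdout; a sketch with made-up numbers:
```python
# Adjusted classify-and-count: estimate the prevalence of a sensitive
# group in a deployment sample when the attribute itself is unobserved,
# correcting a proxy classifier's raw count by its known error rates.
def adjusted_classify_and_count(raw_rate, tpr, fpr):
    # raw_rate: fraction flagged as group members by the proxy classifier
    # tpr/fpr: the proxy's true/false positive rates from a labeled holdout
    est = (raw_rate - fpr) / (tpr - fpr)
    return min(max(est, 0.0), 1.0)   # clip to a valid prevalence

# E.g. proxy flags 35% of applicants; holdout says TPR=0.85, FPR=0.10.
print(adjusted_classify_and_count(0.35, 0.85, 0.10))  # ~0.33
```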
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- A Systematic Approach to Group Fairness in Automated Decision Making [0.0]
The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics.
We do this by considering the sense in which socio-demographic groups are compared when making a statement about fairness.
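For instance, the most basic group comparison, demographic (statistical) parity, contrasts positive-decision rates across groups; a minimal sketch with toy data:
```python
# Demographic parity difference: gap in positive-decision rates between
# socio-demographic groups (0 = parity).
import numpy as np

def demographic_parity_difference(decisions, group):
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(decisions, group))  # 0.75 - 0.25 = 0.5
```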
arXiv Detail & Related papers (2021-09-09T12:47:15Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
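As a toy stand-in for group-level fairness of recourse (not the paper's causal criteria), one can compare the average distance to a linear model's decision boundary among negatively classified individuals in each group:
```python
# Toy recourse-cost comparison: for negatively classified individuals,
# distance to a linear decision boundary approximates the effort needed
# to obtain a favorable decision; compare the average across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
group = rng.integers(0, 2, size=400)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=400) > 0).astype(int)
clf = LogisticRegression().fit(X, y)

margins = clf.decision_function(X)
dist = np.abs(margins) / np.linalg.norm(clf.coef_)   # distance to boundary
denied = clf.predict(X) == 0
for g in (0, 1):
    mask = denied & (group == g)
    print(f"group {g}: mean recourse distance {dist[mask].mean():.3f}")
```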
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
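One way such a metric can be learned, sketched here under assumptions in the spirit of the paper's sensitive-subspace idea: estimate the direction along which the sensitive attribute varies and measure distances only in the orthogonal complement, so individuals who differ mainly in that attribute count as similar.
```python
# Sketch of a learned fair metric: project out the direction that best
# predicts the sensitive attribute, then use Euclidean distance in the
# remaining subspace. Data and setup are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
sensitive = (X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)

w = LogisticRegression().fit(X, sensitive).coef_[0]
w = w / np.linalg.norm(w)                  # unit "sensitive direction"
P = np.eye(X.shape[1]) - np.outer(w, w)    # projector onto its complement

def fair_distance(a, b):
    return np.linalg.norm(P @ (a - b))     # ignores sensitive variation

print(fair_distance(X[0], X[1]))
```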
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
- Principal Fairness for Human and Algorithmic Decision-Making [1.2691047660244335]
We introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making.
Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision.
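Roughly, principal fairness asks that within each principal stratum, defined by an individual's joint potential outcomes, decision rates not depend on the protected attribute. A toy check, under the strong (and here entirely hypothetical) assumption that stratum labels are available:
```python
# Toy principal-fairness check: within each (estimated) principal stratum,
# the decision rate should be (near-)equal across protected-attribute values.
import pandas as pd

df = pd.DataFrame({
    "stratum":  ["safe", "safe", "safe", "risky", "risky", "risky"],
    "attr":     [0, 1, 0, 1, 0, 1],
    "decision": [1, 1, 1, 0, 1, 0],
})
rates = df.groupby(["stratum", "attr"])["decision"].mean().unstack("attr")
print(rates)   # principal fairness holds if each row is (near-)constant
```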
arXiv Detail & Related papers (2020-05-21T00:24:54Z)
- Statistical Equity: A Fairness Classification Objective [6.174903055136084]
We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness and motivate it in its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
arXiv Detail & Related papers (2020-05-14T23:19:38Z)