Gerrymandering Individual Fairness
- URL: http://arxiv.org/abs/2204.11615v1
- Date: Mon, 25 Apr 2022 12:44:57 GMT
- Title: Gerrymandering Individual Fairness
- Authors: Tim Räz
- Abstract summary: Individual fairness is a fairness measure that is supposed to prevent the unfair treatment of individuals on the subgroup level.
The goal of the present paper is to explore the extent to which it is possible to gerrymander individual fairness itself.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Individual fairness, proposed by Dwork et al., is a fairness measure that is
supposed to prevent the unfair treatment of individuals on the subgroup level,
and to overcome the problem that group fairness measures are susceptible to
manipulation, or gerrymandering. The goal of the present paper is to explore
the extent to which it is possible to gerrymander individual fairness itself.
It will be proved that gerrymandering individual fairness in the context of
predicting scores is possible. It will also be argued that individual fairness
provides a very weak notion of fairness for some choices of feature space and
metric. Finally, it will be discussed how the general idea of individual
fairness may be preserved by formulating a notion of fairness that allows us to
overcome some of the problems with individual fairness identified here and
elsewhere.
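The abstract's central claim turns on Dwork et al.'s Lipschitz formulation of individual fairness: a score function f is individually fair with respect to a metric d on individuals and a distance D on outputs if D(f(x), f(y)) <= d(x, y) for all pairs x, y. The sketch below illustrates, under this standard formulation, why the choice of metric matters; the toy population, metrics, and function names are illustrative assumptions, not taken from the paper.

```python
import itertools

def is_individually_fair(f, individuals, d, D):
    """Check the Lipschitz condition of Dwork et al.:
    D(f(x), f(y)) <= d(x, y) for every pair of individuals."""
    return all(
        D(f(x), f(y)) <= d(x, y)
        for x, y in itertools.combinations(individuals, 2)
    )

# Hypothetical toy population: (qualification, group) pairs.
people = [(0.9, "A"), (0.9, "B"), (0.2, "A"), (0.2, "B")]

# A blatantly group-based score: members of group B always get 0.
biased_score = lambda x: x[0] if x[1] == "A" else 0.0

# Distance between output scores (both lie in [0, 1]).
D = lambda s, t: abs(s - t)

# Task-relevant metric: individuals are similar when their
# qualifications are similar, regardless of group.
d_task = lambda x, y: abs(x[0] - y[0])

# Degenerate metric: every pair of distinct individuals is maximally
# dissimilar, so the bound D(f(x), f(y)) <= d(x, y) can never bind.
d_trivial = lambda x, y: 0.0 if x == y else 1.0

print(is_individually_fair(biased_score, people, d_task, D))     # False
print(is_individually_fair(biased_score, people, d_trivial, D))  # True
```

Under the task-relevant metric the biased score fails the check (equally qualified members of groups A and B sit at distance 0 but receive scores 0.9 apart), while the degenerate metric makes the condition vacuous; this is the sense in which individual fairness can be a very weak notion for some choices of feature space and metric.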
Related papers
- Harm Ratio: A Novel and Versatile Fairness Criterion [27.18270261374462]
Envy-freeness has become the cornerstone of fair division research.
We propose a novel fairness criterion, individual harm ratio, inspired by envy-freeness.
Our criterion is powerful enough to differentiate between prominent decision-making algorithms.
arXiv Detail & Related papers (2024-10-03T20:36:05Z)
- Subjective fairness in algorithmic decision-support [0.0]
The treatment of fairness in decision-making literature usually involves quantifying fairness using objective measures.
This work takes a critical stance to highlight the limitations of these approaches using sociological insights.
We redefine fairness as a subjective property, moving from a top-down to a bottom-up approach.
arXiv Detail & Related papers (2024-06-28T14:37:39Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Navigating Fairness Measures and Trade-Offs [0.0]
I show that by using Rawls' notion of justice as fairness, we can create a basis for navigating fairness measures and the accuracy trade-off.
This also helps to close part of the gap between philosophical accounts of distributive justice and the fairness literature.
arXiv Detail & Related papers (2023-07-17T13:45:47Z)
- Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness [13.894631477590362]
Group fairness is achieved by equalising prediction distributions between protected sub-populations, whereas individual fairness requires treating similar individuals alike.
Equalising distributions at the group level may give two similar individuals from the same protected group disparately different classification odds.
arXiv Detail & Related papers (2023-04-19T16:02:00Z)
- Proportional Fairness in Obnoxious Facility Location [70.64736616610202]
We propose a hierarchy of distance-based proportional fairness concepts for the problem.
We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness.
We prove existence results for two extensions to our model.
arXiv Detail & Related papers (2023-01-11T07:30:35Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
- Principal Fairness for Human and Algorithmic Decision-Making [1.2691047660244335]
We introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making.
Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision.
arXiv Detail & Related papers (2020-05-21T00:24:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.