Is calibration a fairness requirement? An argument from the point of
view of moral philosophy and decision theory
- URL: http://arxiv.org/abs/2205.05512v3
- Date: Sun, 19 Jun 2022 16:06:31 GMT
- Title: Is calibration a fairness requirement? An argument from the point of
view of moral philosophy and decision theory
- Authors: Michele Loi and Christoph Heitz
- Abstract summary: We argue that a violation of group calibration may be unfair in some cases, but not unfair in others.
This is in line with claims already advanced in the literature, that algorithmic fairness should be defined in a way that is sensitive to context.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we provide a moral analysis of two criteria of statistical
fairness debated in the machine learning literature: 1) calibration between
groups and 2) equality of false positive and false negative rates between
groups. In our paper, we focus on moral arguments in support of either measure.
The conflict between group calibration vs. false positive and false negative
rate equality is one of the core issues in the debate about group fairness
definitions among practitioners. For any thorough moral analysis, the meaning
of the term fairness has to be made explicit and defined properly. For our
paper, we equate fairness with (non-)discrimination, which is a legitimate
understanding in the discussion about group fairness. More specifically, we
equate it with prima facie wrongful discrimination in the sense this is used in
Prof. Lippert-Rasmussen's treatment of this definition. In this paper, we argue
that a violation of group calibration may be unfair in some cases, but not
unfair in others. This is in line with claims already advanced in the
literature, that algorithmic fairness should be defined in a way that is
sensitive to context. The most important practical implication is that
arguments based on examples in which fairness requires between-group
calibration, or equality in the false-positive/false-negative rates, do not
generalize. For it may be that group calibration is a fairness requirement in
one case, but not in another.
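To make the two criteria concrete: group calibration requires that, among individuals who receive the same predicted score, the observed rate of positive outcomes is the same in each group; error-rate equality requires equal false positive and false negative rates across groups. The following sketch is purely illustrative and not from the paper: the data, group labels, and decision threshold of 0.5 are synthetic assumptions, used only to show how each criterion can be measured.

```python
# Illustrative sketch (not from the paper): measuring the two criteria on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic data: a binary group label, predicted scores, and binary outcomes.
n = 10_000
group = rng.integers(0, 2, size=n)     # two hypothetical demographic groups
score = rng.uniform(0, 1, size=n)      # model's predicted probability of a positive outcome
outcome = rng.binomial(1, score)       # outcomes drawn from the scores themselves

def calibration_by_group(score, outcome, group, bins=10):
    """Within each score bin, compare mean predicted score to observed positive rate, per group."""
    edges = np.linspace(0, 1, bins + 1)
    rows = []
    for g in np.unique(group):
        mask_g = group == g
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = mask_g & (score >= lo) & (score < hi)
            if in_bin.sum() == 0:
                continue
            rows.append((int(g), lo, hi, score[in_bin].mean(), outcome[in_bin].mean()))
    # Group calibration holds if predicted and observed rates agree within every bin for every group.
    return rows

def error_rates_by_group(score, outcome, group, threshold=0.5):
    """False positive and false negative rates per group at a fixed decision threshold."""
    pred = (score >= threshold).astype(int)
    rates = {}
    for g in np.unique(group):
        m = group == g
        fp = np.sum((pred == 1) & (outcome == 0) & m)
        fn = np.sum((pred == 0) & (outcome == 1) & m)
        tn = np.sum((pred == 0) & (outcome == 0) & m)
        tp = np.sum((pred == 1) & (outcome == 1) & m)
        rates[int(g)] = {"FPR": fp / (fp + tn), "FNR": fn / (fn + tp)}
    # Error-rate equality holds if FPR and FNR match across groups.
    return rates

print(error_rates_by_group(score, outcome, group))
```

Note that whether a disparity found by either check is unfair is exactly the contextual, moral question the paper addresses; the code only quantifies the statistical properties.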
Related papers
- What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice [1.8434042562191815]
We argue that in the context of imperfect decision-making systems, we should not only care about what the ideal distribution of benefits/harms among individuals would look like.
This requires us to rethink the way in which we, as algorithmic fairness researchers, view distributive justice and use fairness criteria.
arXiv Detail & Related papers (2024-07-17T11:13:23Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness
Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - The Unfairness of Fair Machine Learning: Levelling down and strict
egalitarianism by default [10.281644134255576]
This paper examines the causes and prevalence of levelling down across fairML.
We propose a first step towards substantive equality in fairML by design through enforcement of minimum acceptable harm thresholds.
arXiv Detail & Related papers (2023-02-05T15:22:43Z) - On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z) - Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z) - Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z) - Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z) - Statistical Equity: A Fairness Classification Objective [6.174903055136084]
We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness, and motivate it with its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
arXiv Detail & Related papers (2020-05-14T23:19:38Z)