Group Fairness Is Not Derivable From Justice: a Mathematical Proof
- URL: http://arxiv.org/abs/2202.03880v1
- Date: Tue, 8 Feb 2022 14:10:47 GMT
- Title: Group Fairness Is Not Derivable From Justice: a Mathematical Proof
- Authors: Nicolò Cangiotti and Michele Loi
- Abstract summary: 'Group fairness' involves ensuring the same chances of acquittal or conviction to all innocent defendants, independently of their morally arbitrary features.
We show mathematically that only a perfect procedure (involving no mistake), a non-deterministic one, or a degenerate one can guarantee group fairness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We argue that an imperfect criminal law procedure cannot be group-fair if
'group fairness' involves ensuring the same chances of acquittal or conviction
to all innocent defendants independently of their morally arbitrary features.
We show mathematically that only a perfect procedure (involving no mistake), a
non-deterministic one, or a degenerate one (everyone or no one is convicted)
can guarantee group fairness, in the general case. Following a recent proposal,
we adopt a definition of group fairness, requiring that individuals who are
equal in merit ought to have the same statistical chances of obtaining
advantages and disadvantages, in a way that is statistically independent of any
of their features that do not count as merit. We explain by mathematical
argument that the only imperfect procedures offering an a priori guarantee of
fairness in relation to all non-merit traits are lotteries or degenerate ones
(i.e., everyone or no one is convicted). To provide a more intuitive point of
view, we use an adjustment of the well-known ROC space to
represent all possible procedures in our model by a schematic diagram. The
argument seems to be equally valid for all human procedures, provided they are
imperfect. This clearly includes algorithmic decision-making, including
decisions based on statistical predictions, since in practice all statistical
models are error prone.
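To make the abstract's claim concrete, the following is a minimal simulation sketch, not taken from the paper: the population model, the evidence variable, the 0.5 decision threshold, and the 0.3 lottery rate are all illustrative assumptions. It shows that a deterministic imperfect procedure generically gives innocent defendants from different groups different chances of wrongful conviction, whereas a lottery or a degenerate rule equalises those chances by construction.

```python
# Illustrative simulation (assumptions of this sketch, not the authors' code):
# an imperfect deterministic procedure generically gives innocent defendants from
# different groups different chances of wrongful conviction, while a lottery or a
# degenerate rule (convict everyone / no one) equalises those chances exactly.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

guilty = rng.random(n) < 0.5                  # merit-relevant fact: actual guilt
group = rng.random(n) < 0.5                   # non-merit trait (group A vs. group B)
# Evidence is informative but noisier for one group; any such dependence breaks fairness.
noise = np.where(group, 1.5, 1.0)
evidence = guilty.astype(float) + rng.normal(0.0, noise, n)

def wrongful_conviction_rate(convict, grp):
    """P(convicted | innocent, group = grp): the quantity group fairness requires to be equal."""
    innocent = ~guilty & (group == grp)
    return convict[innocent].mean()

procedures = {
    "imperfect deterministic (threshold on evidence)": evidence > 0.5,
    "lottery (non-deterministic, ignores evidence)":   rng.random(n) < 0.3,
    "degenerate: acquit everyone":                     np.zeros(n, dtype=bool),
    "degenerate: convict everyone":                    np.ones(n, dtype=bool),
}

for name, convict in procedures.items():
    r_a = wrongful_conviction_rate(convict, False)
    r_b = wrongful_conviction_rate(convict, True)
    print(f"{name:48s}  group A: {r_a:.3f}  group B: {r_b:.3f}")
```

Only the lottery and the two degenerate rules keep the per-group rates equal regardless of how the evidence relates to the non-merit trait; this is the pattern that the paper's adjusted ROC-space diagram is meant to capture across all possible procedures.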
Related papers
- What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice [1.8434042562191815]
We argue that in the context of imperfect decision-making systems, we should not only care about what the ideal distribution of benefits/harms among individuals would look like.
This requires us to rethink the way in which we, as algorithmic fairness researchers, view distributive justice and use fairness criteria.
arXiv Detail & Related papers (2024-07-17T11:13:23Z) - Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness [13.894631477590362]
Group fairness is achieved by equalising prediction distributions between protected sub-populations, while individual fairness requires treating similar individuals alike.
This procedure may provide two similar individuals from the same protected group with disparately different classification odds (a toy numerical sketch follows the related-papers list below).
arXiv Detail & Related papers (2023-04-19T16:02:00Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Reconciling Individual Probability Forecasts [78.0074061846588]
We show that two parties who agree on the data cannot disagree on how to model individual probabilities.
We conclude that although individual probabilities are unknowable, they are contestable via a computationally and data efficient process.
arXiv Detail & Related papers (2022-09-04T20:20:35Z) - Pushing the limits of fairness impossibility: Who's the fairest of them all? [6.396013144017572]
We present a framework that pushes the limits of the impossibility theorem in order to satisfy all three metrics to the best extent possible.
We show experiments demonstrating that our post-processor can improve fairness across the different definitions simultaneously with minimal model performance reduction.
arXiv Detail & Related papers (2022-08-24T22:04:51Z) - CertiFair: A Framework for Certified Global Fairness of Neural Networks [1.4620086904601473]
Individual Fairness suggests that similar individuals with respect to a certain task are to be treated similarly by a Neural Network (NN) model.
We construct a verifier which checks whether the fairness property holds for a given NN in a classification task.
We then provide provable bounds on the fairness of the resulting NN.
arXiv Detail & Related papers (2022-05-20T02:08:47Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables (a rough illustration follows the list below).
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
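As noted in the 'Equalised Odds is not Equal Individual Odds' entry above, here is a toy numerical sketch of how group-level equalisation can leave similar individuals with very different odds. The probabilities and the randomised post-processing rule are invented for illustration and are not taken from that paper.

```python
# Toy sketch (invented numbers, not from the cited paper): group-calibrated randomised
# post-processing can give two near-identical individuals very different decision odds.
def base_classifier(score):
    """Deterministic base decision: positive iff the score exceeds 0.5."""
    return score > 0.5

def post_processed_odds(score, keep_pos=0.9, flip_neg=0.1):
    """Probability of a positive decision after randomised post-processing.
    keep_pos / flip_neg would normally be tuned per group to equalise group-level
    rates; the values here are illustrative assumptions."""
    return keep_pos if base_classifier(score) else flip_neg

# Two individuals from the same protected group with almost identical scores:
for score in (0.49, 0.51):
    print(f"score={score:.2f}  P(positive decision)={post_processed_odds(score):.2f}")
# -> 0.10 vs 0.90: group-level prediction distributions can be equalised across groups
#    while individual odds for similar people remain far apart.
```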
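For the 'Algorithmic Decision Making with Conditional Fairness' entry, here is a rough illustration of what conditioning a parity check on a legitimate 'fairness variable' can look like. The data, variable names, and the use of conditional statistical parity as a stand-in are assumptions made for this sketch; it is not the DCFR implementation.

```python
# Rough sketch of a conditional parity check: compare positive-decision rates between
# groups within each stratum of a legitimate "fairness variable" (here, qualification).
# Data and names are invented; this is not the DCFR code.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                                   # protected attribute
# Qualification correlates with group, and decisions depend only on qualification.
qualification = rng.binomial(2, np.where(group == 1, 0.7, 0.4))
decision = rng.random(n) < 0.2 + 0.3 * qualification

def positive_rate(mask):
    return decision[mask].mean()

print(f"unconditional parity gap: {positive_rate(group == 1) - positive_rate(group == 0):+.3f}")
for q in range(3):
    gap = (positive_rate((group == 1) & (qualification == q))
           - positive_rate((group == 0) & (qualification == q)))
    print(f"conditional gap given qualification={q}: {gap:+.3f}")
# The unconditional gap is large, but the gaps conditional on the fairness variable
# are near zero, which is what a conditional fairness criterion is meant to detect.
```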
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.