Causal Equal Protection as Algorithmic Fairness
- URL: http://arxiv.org/abs/2402.12062v4
- Date: Wed, 05 Feb 2025 11:33:07 GMT
- Title: Causal Equal Protection as Algorithmic Fairness
- Authors: Marcello Di Bello, Nicolò Cangiotti, Michele Loi
- Abstract summary: We defend a novel principle, causal equal protection, that combines classification parity with the causal approach. In the do-calculus, causal equal protection requires that individuals should not be subject to uneven risks of classification error because of their protected or socially salient characteristics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By combining the philosophical literature on statistical evidence and the interdisciplinary literature on algorithmic fairness, we revisit recent objections against classification parity in light of causal analyses of algorithmic fairness and the distinction between predictive and diagnostic evidence. We focus on trial proceedings as a black-box classification algorithm in which defendants are sorted into two groups by convicting or acquitting them. We defend a novel principle, causal equal protection, that combines classification parity with the causal approach. In the do-calculus, causal equal protection requires that individuals should not be subject to uneven risks of classification error because of their protected or socially salient characteristics. The explicit use of protected characteristics, however, may be required if it equalizes these risks.
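One plausible way to make the principle precise, offered here as a sketch rather than the authors' exact notation: write $A$ for the protected characteristic, $Y$ for the true outcome (e.g., guilt), and $\hat{Y}$ for the classification (e.g., the verdict). Causal equal protection would then require, for all values $a, a'$ of $A$:

    \[
      P\bigl(\hat{Y} \neq Y \mid \mathrm{do}(A = a)\bigr)
      \;=\;
      P\bigl(\hat{Y} \neq Y \mid \mathrm{do}(A = a')\bigr)
    \]

That is, intervening on the protected characteristic should leave the individual's risk of classification error unchanged.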
Related papers
- Fairness Is More Than Algorithms: Racial Disparities in Time-to-Recidivism [14.402936852692408]
This work introduces the notion of counterfactual racial disparity and offers a formal test using observational data to assess whether differences in recidivism arise from algorithmic bias, contextual factors, or their interplay.
An empirical study applying this framework to the COMPAS dataset reveals that short-term recidivism patterns do not exhibit racial disparities when controlling for risk scores.
This suggests that factors beyond algorithmic scores, possibly structural disparities in housing, employment, and social support, may accumulate and exacerbate recidivism risks over time.
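A minimal sketch of the kind of adjusted comparison involved, assuming a hypothetical COMPAS-style table with columns recidivated, race, and decile_score (this illustrates the idea of controlling for risk scores, not the paper's actual test procedure):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical COMPAS-style data with columns: recidivated, race, decile_score
    df = pd.read_csv("compas.csv")

    # Unadjusted model: does race predict short-term recidivism on its own?
    unadjusted = smf.logit("recidivated ~ C(race)", data=df).fit()

    # Adjusted model: does race still predict recidivism once the risk score
    # is controlled for? Race coefficients near zero here are consistent with
    # short-term disparities vanishing given the score.
    adjusted = smf.logit("recidivated ~ C(race) + decile_score", data=df).fit()

    print(unadjusted.summary())
    print(adjusted.summary())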
arXiv Detail & Related papers (2025-04-25T18:13:37Z)
- Uncertainty-Aware Fairness-Adaptive Classification Trees [0.0]
This paper introduces a new classification tree algorithm using a novel splitting criterion that incorporates fairness adjustments into the tree-building process.
We show that our method effectively reduces discriminatory predictions compared to traditional classification trees, without significant loss in overall accuracy.
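As a rough illustration of what a fairness-adjusted splitting criterion can look like (an assumed form, impurity gain minus a weighted group-disparity penalty, not necessarily the paper's exact criterion):

    import numpy as np

    def gini(y):
        # Gini impurity of a binary label array
        if len(y) == 0:
            return 0.0
        p = np.mean(y)
        return 2.0 * p * (1.0 - p)

    def split_score(y, a, left_mask, lam=1.0):
        # y: binary labels; a: binary protected attribute;
        # left_mask: boolean array routing samples to the left child;
        # lam: fairness weight (hypothetical parameter).
        n = len(y)
        left, right = y[left_mask], y[~left_mask]
        gain = gini(y) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)
        # Penalize splits that route the two protected groups to the left
        # child at very different rates (a proxy for downstream disparity).
        rate0 = left_mask[a == 0].mean() if np.any(a == 0) else 0.0
        rate1 = left_mask[a == 1].mean() if np.any(a == 1) else 0.0
        return gain - lam * abs(rate0 - rate1)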
arXiv Detail & Related papers (2024-10-08T08:42:12Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
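One rough sketch of how a causal constraint could enter neural training: penalize the change in the model's output under an intervention on the protected attribute. This assumes a binary protected attribute stored as an input column whose flip stands in for the intervention (a strong assumption, valid only if no other features are its causal descendants); the paper's actual approach may differ.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    bce = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    lam = 1.0  # fairness/accuracy trade-off weight (hypothetical)

    def train_step(x, a_idx, y):
        # x: (batch, 10) features, protected attribute at column a_idx;
        # y: (batch,) float labels in {0, 1}
        x_cf = x.clone()
        x_cf[:, a_idx] = 1.0 - x_cf[:, a_idx]  # crude stand-in for do(A := 1 - A)
        logits = model(x).squeeze(-1)
        logits_cf = model(x_cf).squeeze(-1)
        # Task loss plus a penalty on how much the prediction shifts
        # under the intervention on the protected attribute.
        loss = bce(logits, y) + lam * (logits - logits_cf).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()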
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- When Fair Classification Meets Noisy Protected Attributes [8.362098382773265]
This is the first head-to-head comparison of attribute-reliant, noise-tolerant, and attribute-blind fair classification algorithms.
Our study reveals that attribute-blind and noise-tolerant fair classifiers can achieve a level of performance similar to that of attribute-reliant algorithms.
arXiv Detail & Related papers (2023-07-06T21:38:18Z)
- Counterfactual Situation Testing: Uncovering Discrimination under Fairness given the Difference [26.695316585522527]
We present counterfactual situation testing (CST), a causal data mining framework for detecting discrimination in classifiers.
CST aims to answer in an actionable and meaningful way the question "what would have been the model outcome had the individual, or complainant, been of a different protected status?"
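A minimal sketch of the counterfactual comparison at the heart of that query, assuming a hypothetical counterfactual(x) generator (e.g., derived from a causal model); the full CST framework additionally builds control and test groups around each instance:

    def cst_flag(model, x, counterfactual, threshold=0.5):
        # Probability of the positive outcome for the factual individual
        # and for their counterfactual with a different protected status.
        p_factual = model.predict_proba([x])[0, 1]
        p_counter = model.predict_proba([counterfactual(x)])[0, 1]
        # Flag potential discrimination when the decision flips across
        # the factual/counterfactual pair.
        return (p_factual >= threshold) != (p_counter >= threshold)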
arXiv Detail & Related papers (2023-02-23T11:41:21Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many proposed methods can cause direct harm, such as false rejections and unequal benefits from robustness training.
We compare the equity of two rejection-based defenses, randomized smoothing and neural rejection, and find randomized smoothing more equitable because of its sampling mechanism for minority groups.
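For reference, a bare-bones sketch of randomized-smoothing prediction with rejection (illustrative parameter choices and a generic sklearn-style classifier are assumed; the paper's contribution is the equity comparison, not this mechanism):

    import numpy as np

    def smoothed_predict(model, x, sigma=0.25, n=100, margin=0.1, rng=None):
        # model: any classifier with a .predict() method; x: 1-D feature vector
        rng = rng or np.random.default_rng()
        # Majority vote over Gaussian-perturbed copies of the input.
        noisy = x[None, :] + sigma * rng.standard_normal((n, x.shape[0]))
        votes = model.predict(noisy).astype(int)
        counts = np.bincount(votes)
        top = counts.argmax()
        # Reject when the winning class does not clear the vote margin; how
        # such rejections distribute across groups is the equity question.
        if counts[top] / n < 0.5 + margin:
            return None  # rejection
        return int(top)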
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- Towards Fair Classification against Poisoning Attacks [52.57443558122475]
We study the poisoning scenario where the attacker can insert a small fraction of samples into training data.
We propose a general framework with theoretical guarantees that adapts traditional defense methods to fair classification under poisoning attacks.
arXiv Detail & Related papers (2022-10-18T00:49:58Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata for machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking an anti-causal perspective, we draw explicit connections between a common fairness criterion, separation, and a common notion of robustness (formalized below).
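For concreteness, separation is the standard criterion (the textbook definition, not quoted from the paper): the prediction $\hat{Y}$ is independent of group membership $A$ conditional on the true label $Y$,

    \[
      \hat{Y} \perp A \mid Y,
      \quad\text{i.e.}\quad
      P(\hat{Y} = \hat{y} \mid A = a, Y = y)
      = P(\hat{Y} = \hat{y} \mid A = a', Y = y)
      \ \ \text{for all } a, a', \hat{y}, y.
    \]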
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
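A rough gradient-based sketch of a prediction-sensitivity score of this kind (the exact accumulation and feature weighting in the paper's metric are not reproduced here):

    import torch

    def prediction_sensitivity(model, x, weights=None):
        # model: maps a feature vector to a scalar score; x: single input
        # vector; weights: optional per-feature emphasis, e.g., concentrated
        # on protected or proxy features (hypothetical).
        x = x.clone().detach().requires_grad_(True)
        model(x).squeeze().backward()
        grad = x.grad.abs()
        if weights is not None:
            grad = grad * weights
        # Accumulate sensitivity across input features.
        return grad.sum().item()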
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice [0.0]
Machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes.
I argue that much of fair ML research fails to account for fairness issues in the underlying crime data.
Instead of building AI that reifies power imbalances, I ask whether data science can be used to understand the root causes of structural marginalization.
arXiv Detail & Related papers (2021-06-25T06:52:49Z)
- Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such approaches will likely be deemed "algorithmic affirmative action."
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z)
- Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features [72.72840552588134]
We identify the proximity of the latent representations of different classes in fine-grained recognition networks as a key factor in the success of adversarial attacks.
We introduce an attention-based regularization mechanism that maximally separates the discriminative latent features of different classes.
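A minimal sketch of a separation-style regularizer on latent features, pushing per-class centroids apart (the paper's actual mechanism is attention-based and more involved):

    import torch

    def separation_penalty(features, labels):
        # features: (batch, d) latent representations; labels: (batch,) class ids
        classes = labels.unique()
        if len(classes) < 2:
            return features.sum() * 0.0  # keep the graph; no pairs to separate
        means = torch.stack([features[labels == c].mean(dim=0) for c in classes])
        # Negative mean pairwise distance between class centroids: adding this
        # term to the training loss pushes discriminative features apart.
        dists = torch.cdist(means, means, p=2)
        n = len(classes)
        return -dists.sum() / (n * (n - 1))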
arXiv Detail & Related papers (2020-06-10T18:34:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.