Fairness Is More Than Algorithms: Racial Disparities in Time-to-Recidivism
- URL: http://arxiv.org/abs/2504.18629v1
- Date: Fri, 25 Apr 2025 18:13:37 GMT
- Title: Fairness Is More Than Algorithms: Racial Disparities in Time-to-Recidivism
- Authors: Jessy Xinyi Han, Kristjan Greenewald, Devavrat Shah
- Abstract summary: This work introduces the notion of counterfactual racial disparity and offers a formal test using observational data to assess whether differences in recidivism arise from algorithmic bias, contextual factors, or their interplay. An empirical study applying this framework to the COMPAS dataset reveals that short-term recidivism patterns do not exhibit racial disparities when controlling for risk scores, but statistically significant disparities emerge over longer follow-up periods. This suggests that factors beyond algorithmic scores, possibly structural disparities in housing, employment, and social support, may accumulate and exacerbate recidivism risks over time.
- Score: 14.402936852692408
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Racial disparities in recidivism remain a persistent challenge within the criminal justice system, increasingly exacerbated by the adoption of algorithmic risk assessment tools. Past works have primarily focused on bias induced by these tools, treating recidivism as a binary outcome. Limited attention has been given to non-algorithmic factors (including socioeconomic ones) in driving racial disparities from a systemic perspective. To that end, this work presents a multi-stage causal framework to investigate the advent and extent of disparities by considering time-to-recidivism rather than a simple binary outcome. The framework captures interactions among races, the algorithm, and contextual factors. This work introduces the notion of counterfactual racial disparity and offers a formal test using survival analysis that can be conducted with observational data to assess if differences in recidivism arise from algorithmic bias, contextual factors, or their interplay. In particular, it is formally established that if sufficient statistical evidence for differences across racial groups is observed, it would support rejecting the null hypothesis that non-algorithmic factors (including socioeconomic ones) do not affect recidivism. An empirical study applying this framework to the COMPAS dataset reveals that short-term recidivism patterns do not exhibit racial disparities when controlling for risk scores. However, statistically significant disparities emerge with longer follow-up periods, particularly for low-risk groups. This suggests that factors beyond algorithmic scores, possibly structural disparities in housing, employment, and social support, may accumulate and exacerbate recidivism risks over time. This underscores the need for policy interventions extending beyond algorithmic improvements to address broader influences on recidivism trajectories.
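The abstract does not spell the test out, but its flavor can be illustrated with a minimal sketch under stated assumptions: stratify defendants by COMPAS risk band, then compare time-to-recidivism survival curves across racial groups with a log-rank test at a chosen follow-up horizon. The file name and the `days_to_recid`/`recid_event` column names below are hypothetical placeholders; the paper's actual test is a counterfactual construction, of which this is only a surface-level analogue.

```python
# A minimal sketch (not the authors' code): within each COMPAS risk band,
# compare time-to-recidivism across racial groups with a log-rank test.
# `compas_survival.csv`, `days_to_recid`, and `recid_event` are assumed
# placeholder names; `race` and `decile_score` are standard ProPublica fields.
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.read_csv("compas_survival.csv")  # hypothetical extract of COMPAS data

# COMPAS documentation buckets decile scores as 1-4 low, 5-7 medium, 8-10 high.
bands = {"low": range(1, 5), "medium": range(5, 8), "high": range(8, 11)}

for label, deciles in bands.items():
    stratum = df[df["decile_score"].isin(deciles)]
    a = stratum[stratum["race"] == "African-American"]
    b = stratum[stratum["race"] == "Caucasian"]
    result = logrank_test(
        a["days_to_recid"], b["days_to_recid"],
        event_observed_A=a["recid_event"],  # 1 = recidivated, 0 = censored
        event_observed_B=b["recid_event"],
        # t_0 restricts the comparison to a follow-up horizon; varying it
        # mimics the paper's short- vs. long-term analyses.
        t_0=730,  # e.g. two years, in days
    )
    # In the paper's framing, a small p-value within a risk stratum is
    # evidence against the null that non-algorithmic factors play no role.
    print(f"{label}-risk stratum: log-rank p = {result.p_value:.4f}")
```

Varying `t_0` from months to years is what would surface the pattern the abstract reports: no within-stratum disparity at short horizons, statistically significant disparities at longer ones.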
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time. We show that enforcing a causal constraint often reduces the disparity between demographic groups. We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Not as Simple as It Looked: Are We Concluding for Biased Arrest Practices? [0.0]
The study categorizes explanations into types of place, types of person, and a combination of both.
The analysis of violent arrest outcomes reveals that approximately 40 percent of the observed variation is attributable to neighborhood-level characteristics.
arXiv Detail & Related papers (2024-04-13T18:50:59Z)
- Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that, surprisingly, one can still compute meaningful bounds on treatment rates for high-risk individuals. We use the fact that in many real-world settings we have data from before any allocation to derive unbiased estimates of risk.
arXiv Detail & Related papers (2024-03-18T21:09:06Z)
- Causal Equal Protection as Algorithmic Fairness [0.0]
We defend a novel principle, causal equal protection, that combines classification parity with the causal approach. In the do-calculus, causal equal protection requires that individuals not be subject to uneven risks of classification error because of their protected or socially salient characteristics.
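The summary does not state the formal condition; one plausible formalization consistent with that description, with predictor \(\hat{Y}\), label \(Y\), and protected attribute \(A\), would require error rates to be invariant under interventions on \(A\):

```latex
% One plausible formalization (not quoted from the paper): the probability
% of classification error must be invariant under interventions on the
% protected attribute A.
\[
  P\bigl(\hat{Y} \neq Y \mid \mathrm{do}(A = a)\bigr)
    \;=\; P\bigl(\hat{Y} \neq Y \mid \mathrm{do}(A = a')\bigr)
  \quad \text{for all } a, a'.
\]
```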
arXiv Detail & Related papers (2024-02-19T11:30:00Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many methods that have been proposed can cause direct harm, like false rejection and unequal benefits from robustness training.
We compare the equity of two rejection-based defenses, randomized smoothing and neural rejection, finding randomized smoothing more equitable due to its sampling mechanism for minority groups.
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset, in the spirit of the sketch below.
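D-BIAS's actual simulation method is not described in this summary; the toy sketch below only illustrates the general idea on an invented linear structural causal model: delete the direct edge from a protected attribute to the outcome, resample, and compare group gaps. Every variable name and coefficient here is hypothetical.

```python
# Toy illustration (not D-BIAS itself): simulate a "debiased" dataset by
# deleting the direct causal edge protected_attr -> outcome in a linear SCM.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def simulate(edge_weight):
    # protected attribute and a legitimate skill variable (exogenous)
    protected = rng.integers(0, 2, size=n)
    skill = rng.normal(size=n)
    # outcome depends on skill, the (possibly biased) edge, and noise
    outcome = 1.5 * skill + edge_weight * protected + rng.normal(size=n)
    return protected, outcome

# biased world: direct effect of the protected attribute on the outcome
p_biased, y_biased = simulate(edge_weight=2.0)
# debiased world: the user deletes the edge, so its weight becomes 0
p_fair, y_fair = simulate(edge_weight=0.0)

# the group gap in mean outcome shrinks once the biased edge is removed
for tag, (p, y) in {"biased": (p_biased, y_biased),
                    "debiased": (p_fair, y_fair)}.items():
    gap = y[p == 1].mean() - y[p == 0].mean()
    print(f"{tag}: mean outcome gap = {gap:+.2f}")
```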
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Toward Understanding Bias Correlations for Mitigation in NLP [34.956581421295]
This work aims to provide a first systematic study toward understanding bias correlations in mitigation.
We examine bias mitigation in two common NLP tasks -- toxicity detection and word embeddings.
Our findings suggest that biases are correlated and present scenarios in which independent debiasing approaches may be insufficient.
arXiv Detail & Related papers (2022-05-24T22:48:47Z)
- Treatment Effect Risk: Bounds and Inference [58.442274475425144]
Even if the average treatment effect, which measures the change in social welfare, is positive, there is a risk of a negative effect on, say, some 10% of the population.
In this paper we consider how to nonetheless assess this important risk measure, formalized as the conditional value at risk (CVaR) of the ITE distribution.
Some bounds can also be interpreted as summarizing a complex CATE function into a single metric and are of interest independently of being bounds.
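For reference, the risk measure named here has a standard lower-tail definition (this formula is generic, not quoted from the paper):

```latex
% Standard lower-tail CVaR at level \alpha: the average ITE over the
% worst-off \alpha-fraction of the population, where q_u denotes the
% u-th quantile of the ITE distribution.
\[
  \mathrm{CVaR}_{\alpha}(\mathrm{ITE})
    = \frac{1}{\alpha} \int_{0}^{\alpha} q_{u}(\mathrm{ITE}) \, du
\]
```

A negative \(\mathrm{CVaR}_{\alpha}\) signals that the worst-off \(\alpha\)-fraction is harmed on average even when the overall average treatment effect is positive.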
arXiv Detail & Related papers (2022-01-15T17:21:26Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on the gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Deep Interpretable Criminal Charge Prediction and Algorithmic Bias [2.3347476425292717]
This paper addresses bias issues with post-hoc explanations to provide a trustworthy prediction of whether a person will receive future criminal charges.
Our approach shows consistent and reliable prediction precision and recall on a real-life dataset.
arXiv Detail & Related papers (2021-06-25T07:00:13Z)
- Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice [0.0]
Machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes.
I argue that much of the fair ML literature fails to account for fairness issues with the underlying crime data.
Instead of building AI that reifies power imbalances, I ask whether data science can be used to understand the root causes of structural marginalization.
arXiv Detail & Related papers (2021-06-25T06:52:49Z)