Improving Fairness in Criminal Justice Algorithmic Risk Assessments
Using Conformal Prediction Sets
- URL: http://arxiv.org/abs/2008.11664v3
- Date: Fri, 21 May 2021 17:35:18 GMT
- Title: Improving Fairness in Criminal Justice Algorithmic Risk Assessments
Using Conformal Prediction Sets
- Authors: Richard A. Berk and Arun Kumar Kuchibhotla
- Abstract summary: We adopt a framework from conformal prediction sets to remove unfairness from risk algorithms.
From a sample of 300,000 offenders at their arraignments, we construct a confusion table and its derived measures of fairness.
We see our work as a demonstration of concept for application in a wide variety of criminal justice decisions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Risk assessment algorithms have been correctly criticized for potential
unfairness, and there is an active cottage industry trying to make repairs. In
this paper, we adopt a framework from conformal prediction sets to remove
unfairness from risk algorithms themselves and the covariates used for
forecasting. From a sample of 300,000 offenders at their arraignments, we
construct a confusion table and its derived measures of fairness that are
effectively free of any meaningful differences between Black and White offenders.
We also produce fair forecasts for individual offenders coupled with valid
probability guarantees that the forecasted outcome is the true outcome. We see
our work as a demonstration of concept for application in a wide variety of
criminal justice decisions. The procedures provided can be routinely
implemented in jurisdictions with the usual criminal justice datasets used by
administrators. The requisite procedures can be found in the scripting software
R. However, whether stakeholders will accept our approach as a means to achieve
risk assessment fairness is unknown. There also are legal issues that would
need to be resolved, although we offer a Pareto improvement.
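The split-conformal machinery the paper builds on can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the calibration scores, the binary outcome, and the 0.1 miscoverage level are all simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a fitted risk model: predicted probability of the outcome
# (e.g. re-arrest) on a held-out calibration sample. Simulated here.
n_cal = 2000
p_cal = rng.uniform(size=n_cal)                        # model scores
y_cal = (rng.uniform(size=n_cal) < p_cal).astype(int)  # observed outcomes

# Split-conformal nonconformity score: one minus the probability the
# model assigned to the class that actually occurred.
prob_true = np.where(y_cal == 1, p_cal, 1.0 - p_cal)
scores = 1.0 - prob_true

# Calibration threshold giving at least (1 - alpha) marginal coverage.
alpha = 0.1
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

def prediction_set(p):
    """Labels whose nonconformity score falls at or below the threshold."""
    return [y for y in (0, 1) if (1.0 - p if y == 1 else p) <= q]

# Ambiguous scores yield the two-label set; confident scores, a singleton.
print(prediction_set(0.05), prediction_set(0.5), prediction_set(0.95))
```

With 1 - alpha = 0.9, offenders whose scores are ambiguous receive the two-label set {0, 1}; reporting that uncertainty honestly, rather than forcing a point decision, is the mechanism the paper exploits.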
Related papers
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Randomization Techniques to Mitigate the Risk of Copyright Infringement [48.75580082851766]
We investigate potential randomization approaches that can complement current practices for copyright protection.
This is motivated by the inherent ambiguity of the rules that determine substantial similarity in copyright precedents.
Similar randomized approaches, such as differential privacy, have been successful in mitigating privacy risks.
arXiv Detail & Related papers (2024-08-21T20:55:00Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- A Distributionally Robust Optimisation Approach to Fair Credit Scoring [2.8851756275902467]
Credit scoring has been catalogued by the European Commission and the Executive Office of the US President as a high-risk classification task.
To address this concern, recent credit scoring research has considered a range of fairness-enhancing techniques.
arXiv Detail & Related papers (2024-02-02T11:43:59Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
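Both EOC properties can be audited directly on held-out data by comparing per-group interval coverage with the overall rate. The sketch below is purely illustrative: the groups, outcomes, and intervals are all simulated, and the half-width of two noise standard deviations implies roughly 95% nominal coverage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, size=n)   # two demographic groups (simulated)
y = rng.normal(size=n)               # true outcomes

# Simulated prediction intervals: centre is a noisy estimate of y, with a
# half-width of two noise standard deviations (~95% nominal coverage).
centre = y + rng.normal(scale=0.2, size=n)
lo, hi = centre - 0.4, centre + 0.4

covered = (lo <= y) & (y <= hi)
overall = covered.mean()
by_group = [covered[group == g].mean() for g in (0, 1)]

# EOC check: group coverage rates close to each other, overall near target.
print(round(overall, 3), [round(c, 3) for c in by_group])
```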
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work aims at assessing up to what point we can assure legal fairness through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Arbitrariness and Social Prediction: The Confounding Role of Variance in Fair Classification [31.392067805022414]
Variance in predictions across different trained models is a significant, under-explored source of error in fair binary classification.
In practice, the variance on some data examples is so large that decisions can be effectively arbitrary.
We develop an ensembling algorithm that abstains from classification when a prediction would be arbitrary.
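The abstention idea can be sketched directly: poll an ensemble and withhold a decision when the vote is close enough that a different draw of training data could plausibly flip it. Everything below, the simulated ensemble and the 0.8 agreement threshold included, is an illustrative assumption rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated predictions from B = 25 models trained on bootstrap resamples
# (random perturbations of a base score stand in for actual retraining).
B, n = 25, 8
base = rng.uniform(size=n)
votes = (base + rng.normal(scale=0.15, size=(B, n)) > 0.5).astype(int)

# Abstain (-1) when the majority is too slim, i.e. the decision would be
# effectively arbitrary across training draws.
share = votes.mean(axis=0)
agree = np.maximum(share, 1 - share)
decision = np.where(agree >= 0.8, (share > 0.5).astype(int), -1)
print(decision)
```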
arXiv Detail & Related papers (2023-01-27T06:52:04Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
- The Fairness of Credit Scoring Models [0.0]
In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers.
This can be unintentional and originate from the training dataset or from the model itself.
We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness.
arXiv Detail & Related papers (2022-05-20T14:20:40Z)
- Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
- A Risk Assessment of a Pretrial Risk Assessment Tool: Tussles, Mitigation Strategies, and Inherent Limits [0.0]
We perform a risk assessment of the Public Safety Assessment (PSA), software used in San Francisco and other jurisdictions to assist judges in deciding whether defendants need to be detained before their trial.
We articulate benefits and limitations of the PSA solution, as well as suggest mitigation strategies.
We then draft the Handoff Tree, a novel algorithmic approach to pretrial justice that accommodates some of the inherent limitations of risk assessment tools by design.
arXiv Detail & Related papers (2020-05-14T23:56:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.