Deep Interpretable Criminal Charge Prediction and Algorithmic Bias
- URL: http://arxiv.org/abs/2106.13456v1
- Date: Fri, 25 Jun 2021 07:00:13 GMT
- Title: Deep Interpretable Criminal Charge Prediction and Algorithmic Bias
- Authors: Abdul Rafae Khan, Jia Xu, Peter Varsanyi, Rachit Pabreja
- Abstract summary: This paper addresses bias issues with post-hoc explanations to provide a trustworthy prediction of whether a person will receive future criminal charges.
Our approach shows consistent and reliable prediction precision and recall on a real-life dataset.
- Score: 2.3347476425292717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While predictive policing has become increasingly common in assisting with
decisions in the criminal justice system, the use of these results is still
controversial. Some deep learning-based software lacks accuracy (e.g., in
F-1 score), and many decision processes are not transparent, raising doubts
about decision bias, such as perceived racial, age, and gender disparities. This
paper addresses bias issues with post-hoc explanations to provide a trustworthy
prediction of whether a person will receive future criminal charges given one's
previous criminal records by learning temporal behavior patterns over twenty
years. A Bi-LSTM relieves the vanishing gradient problem, and attention
mechanisms allow learning and interpretation of feature importance. Our
approach shows consistent and reliable prediction precision and recall on a
real-life dataset. Our analysis of the importance of each input feature shows
the critical causal impact on decision-making, suggesting that criminal
histories are statistically significant factors, while identifiers, such as
race, gender, and age, are not. Finally, our algorithm indicates that a suspect
tends to increase crime severity gradually rather than suddenly over time.
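The abstract names the architecture only at a high level (a Bi-LSTM over roughly twenty years of record history, with attention providing interpretable importance weights); the paper's actual features, dimensions, and training setup are not given here. As a minimal sketch of that kind of model, assuming illustrative input sizes and a binary charge/no-charge target, the following PyTorch code is one plausible instantiation, not the authors' implementation:

```python
# Minimal sketch (not the authors' code): a Bi-LSTM with additive attention
# for predicting a future criminal charge from a sequence of yearly record
# features. Feature dimension, hidden size, and sequence length are assumed.
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    def __init__(self, num_features=16, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_size, 1)   # scores each time step
        self.out = nn.Linear(2 * hidden_size, 1)    # binary charge / no charge

    def forward(self, x):
        # x: (batch, years, num_features), e.g. 20 years of record features
        h, _ = self.lstm(x)                           # (batch, years, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, years, 1)
        context = (weights * h).sum(dim=1)            # attention-pooled summary
        logit = self.out(context).squeeze(-1)
        # The attention weights can be inspected post hoc as a rough measure
        # of which time steps drove the prediction.
        return logit, weights.squeeze(-1)

# Usage with random data standing in for real records.
model = BiLSTMAttentionClassifier()
x = torch.randn(4, 20, 16)        # 4 people, 20 years, 16 features per year
logits, attn = model(x)
probs = torch.sigmoid(logits)     # predicted probability of a future charge
```

In this sketch the attention weights give per-time-step importance; attributing importance to individual input features (e.g., criminal history versus race, gender, or age, as the abstract reports) would additionally require feature-level attention or a post-hoc attribution method, which is not shown here.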
Related papers
- Crime Prediction Using Machine Learning and Deep Learning: A Systematic
Review and Future Directions [2.624902795082451]
This review paper examines over 150 articles to explore the various machine learning and deep learning algorithms applied to predict crime.
The study provides access to the datasets used for crime prediction by researchers.
The paper highlights potential gaps and future directions that can enhance the accuracy of crime prediction.
arXiv Detail & Related papers (2023-03-28T21:07:42Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Spatial-Temporal Hypergraph Self-Supervised Learning for Crime Prediction [60.508960752148454]
This work proposes a Spatial-Temporal Hypergraph Self-Supervised Learning framework to tackle the label scarcity issue in crime prediction.
We propose cross-region hypergraph structure learning to encode region-wise crime dependency across the entire urban space.
We also design a dual-stage self-supervised learning paradigm that not only jointly captures local- and global-level spatial-temporal crime patterns, but also supplements the sparse crime representation by augmenting region self-discrimination.
arXiv Detail & Related papers (2022-04-18T23:46:01Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Analyzing a Carceral Algorithm used by the Pennsylvania Department of Corrections [0.0]
This paper is focused on the Pennsylvania Additive Classification Tool (PACT) used to classify prisoners' custody levels while they are incarcerated.
The algorithm in this case determines the likelihood that a person will endure additional disciplinary actions, complete required programming, and gain experiences that, among other things, are distilled into variables feeding into the parole algorithm.
arXiv Detail & Related papers (2021-12-06T18:47:31Z)
- Uncertainty in Criminal Justice Algorithms: simulation studies of the Pennsylvania Additive Classification Tool [0.0]
We study the Pennsylvania Additive Classification Tool (PACT) that assigns custody levels to incarcerated individuals.
We analyze the PACT in ways that criminal justice algorithms are often analyzed.
We propose and carry out some new ways to study such algorithms.
arXiv Detail & Related papers (2021-12-01T06:27:24Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies [79.66833203975729]
We conduct a vignette study in which laypersons are tasked with predicting future re-arrests.
Our key findings are as follows: Participants often predict that an offender will be rearrested even when they deem the likelihood of re-arrest to be well below 50%.
Judicial decisions, unlike participants' predictions, depend in part on factors that are orthogonal to the likelihood of re-arrest.
arXiv Detail & Related papers (2021-09-03T11:09:10Z)
- Covert Embodied Choice: Decision-Making and the Limits of Privacy Under Biometric Surveillance [6.92628425870087]
We present results from a virtual reality task in which gaze, movement, and other physiological signals are tracked.
We find that while participants use a variety of strategies, data collected remains highly predictive of choice (80% accuracy).
A significant portion of participants became more predictable despite efforts to obfuscate, possibly indicating mistaken priors about the dynamics of algorithmic prediction.
arXiv Detail & Related papers (2021-01-04T04:45:22Z)
- Fairness Evaluation in Presence of Biased Noisy Labels [84.12514975093826]
We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model.
Our experimental results on two real world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
arXiv Detail & Related papers (2020-03-30T20:47:00Z)
- Crime Prediction Using Spatio-Temporal Data [8.50468505606714]
A supervised learning technique is used to predict crimes with better accuracy.
The proposed system is fed a twelve-year criminal-activity dataset from the city of San Francisco.
arXiv Detail & Related papers (2020-03-11T16:19:19Z)