A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems
- URL: http://arxiv.org/abs/2402.14959v2
- Date: Wed, 20 Mar 2024 15:32:00 GMT
- Title: A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems
- Authors: Jessy Xinyi Han, Andrew Miller, S. Craig Watkins, Christopher Winship, Fotini Christia, Devavrat Shah
- Abstract summary: We present a multi-stage causal framework incorporating criminality.
In settings like airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race.
In police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting against the other race.
- Score: 13.277413612930102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are interested in developing a data-driven method to evaluate race-induced biases in law enforcement systems. While recent works have addressed this question in the context of police-civilian interactions using police stop data, they have two key limitations. First, bias can only be properly quantified if true criminality is accounted for in addition to race, yet prior works do not account for it. Second, law enforcement systems are multi-stage, so it is important to isolate the true source of bias within the "causal chain of interactions" rather than simply focusing on the end outcome; this can help guide reforms. In this work, we address these challenges by presenting a multi-stage causal framework incorporating criminality. We provide a theoretical characterization and an associated data-driven method to evaluate (a) the presence of any form of racial bias, and (b) if so, the primary source of such a bias in terms of race and criminality. Our framework identifies three canonical scenarios with distinct characteristics: in settings like (1) airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race; (2) AI-empowered policing, the primary source of observed bias against a race is likely to be bias in law enforcement against criminals of that race; and (3) police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting against the other race. Through an extensive empirical study using police-civilian interaction data and 911 call data, we find an instance of such a counter-intuitive phenomenon: in New Orleans, the observed bias is against the majority race, and the likely reason for it is the over-reporting (via 911 calls) of incidents involving the minority race by the general public.
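The abstract's central point, that an observed end-of-chain disparity can originate in an earlier stage of the causal chain rather than in enforcement itself, can be illustrated with a minimal two-stage simulation. This is an illustrative sketch, not the authors' framework: group names, rates, and the race-blind enforcement assumption are all invented for the example.

```python
import random

random.seed(0)

# Stage 1: the public reports incidents, over-reporting group B (assumption).
# Stage 2: law enforcement acts on every report with NO racial bias of its own.
# The observed stop rate still differs by group, purely because of stage 1.

N = 100_000
base_incident_rate = 0.05            # identical true criminality in both groups
report_rate = {"A": 0.3, "B": 0.6}   # public over-reports group B incidents

stops = {"A": 0, "B": 0}
pop = {"A": 0, "B": 0}
for _ in range(N):
    group = random.choice(["A", "B"])
    pop[group] += 1
    incident = random.random() < base_incident_rate
    reported = incident and random.random() < report_rate[group]
    if reported:  # race-blind enforcement: every report leads to a stop
        stops[group] += 1

for group in ("A", "B"):
    print(group, round(stops[group] / pop[group], 4))
```

Looking only at the end outcome (stop rates), group B appears to face harsher enforcement; isolating the stages shows the disparity enters at the reporting stage, which is the kind of source attribution the framework formalizes.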
Related papers
- Testing for racial bias using inconsistent perceptions of race [1.0090972954941624]
Tests for racial bias commonly assess whether two people of different races are treated differently.
A fundamental challenge is that, because two people may differ in many ways, factors besides race might explain differences in treatment.
We propose a test for bias which circumvents the difficulty of comparing two people by instead assessing whether the same person is treated differently when their race is perceived differently.
arXiv Detail & Related papers (2024-09-17T15:18:46Z) - Not as Simple as It Looked: Are We Concluding for Biased Arrest Practices? [0.0]
The study categorizes explanations into types of place, types of person, and a combination of both.
The analysis of violent arrest outcomes reveals that approximately 40 percent of the observed variation is attributable to neighborhood-level characteristics.
arXiv Detail & Related papers (2024-04-13T18:50:59Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - Toward Understanding Bias Correlations for Mitigation in NLP [34.956581421295]
This work aims to provide a first systematic study toward understanding bias correlations in mitigation.
We examine bias mitigation in two common NLP tasks -- toxicity detection and word embeddings.
Our findings suggest that biases are correlated and present scenarios in which independent debiasing approaches may be insufficient.
arXiv Detail & Related papers (2022-05-24T22:48:47Z) - The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes for social bias in downstream tasks.
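The idea of a cosine-based bias score can be sketched as follows. This is a simplified illustration in the spirit of SAME, not the paper's exact formula: it scores a word by the signed difference of its cosine similarity to two attribute-group centroids, using toy 2-d vectors.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_vector(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def bias_score(word_vec, group_a, group_b):
    # Signed difference of cosine similarity to the two attribute-group
    # centroids; 0 means the word is equally close to both groups.
    return cosine(word_vec, mean_vector(group_a)) - cosine(word_vec, mean_vector(group_b))

# Toy 2-d embeddings (illustrative only)
group_a = [[1.0, 0.1], [0.9, 0.0]]
group_b = [[0.1, 1.0], [0.0, 0.9]]
neutral = [0.5, 0.5]
skewed = [1.0, 0.0]

print(round(bias_score(neutral, group_a, group_b), 3))  # ~0: balanced
print(round(bias_score(skewed, group_a, group_b), 3))   # > 0: leans toward group A
```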
arXiv Detail & Related papers (2022-03-28T09:28:13Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
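A minimal sketch of frequency-based instance reweighting, assuming each training instance carries a demographic group label: each instance is weighted inversely to its group's frequency so every group contributes equal total weight. The paper's actual scheme targets correlations between author demographics and linguistic variables, so this is only the basic idea.

```python
from collections import Counter

def balanced_weights(demographics):
    # Weight each instance inversely to its group's frequency so that every
    # demographic group contributes the same total weight during training.
    counts = Counter(demographics)
    n_groups = len(counts)
    n = len(demographics)
    return [n / (n_groups * counts[g]) for g in demographics]

groups = ["a", "a", "a", "b"]  # group "a" is 3x over-represented
weights = balanced_weights(groups)
print(weights)  # group "a" instances downweighted, group "b" upweighted
```

The resulting weights can be passed to any loss that accepts per-instance weights (e.g. a `sample_weight` argument), leaving the model architecture unchanged.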
arXiv Detail & Related papers (2021-09-16T23:40:28Z) - Detecting Racial Bias in Jury Selection [0.7106986689736826]
APM Reports collated historical court records to assess whether the State exhibited a racial bias in striking potential jurors.
This analysis used backward stepwise logistic regression to conclude that race was a significant factor.
We apply Optimal Feature Selection to identify the globally-optimal subset of features and affirm that there is significant evidence of racial bias in the strike decisions.
arXiv Detail & Related papers (2021-03-22T13:47:33Z) - The effect of differential victim crime reporting on predictive policing systems [84.86615754515252]
We show how differential victim crime reporting rates can lead to outcome disparities in common crime hot spot prediction models.
Our results suggest that differential crime reporting rates can lead to a displacement of predicted hotspots from high crime but low reporting areas to high or medium crime and high reporting areas.
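The displacement mechanism can be shown with a toy example: a naive hot spot model that ranks areas by *reported* crime will miss a high-crime, low-reporting area. Area names and numbers below are illustrative, not from the paper.

```python
# Hypothetical areas: (true crime count per period, victim reporting rate).
areas = {
    "high_crime_low_reporting": (100, 0.2),
    "medium_crime_high_reporting": (60, 0.8),
    "low_crime_high_reporting": (20, 0.8),
}

# A naive hot spot model ranks areas by reported crime only.
reported = {name: crimes * rate for name, (crimes, rate) in areas.items()}
predicted_hotspot = max(reported, key=reported.get)
true_hotspot = max(areas, key=lambda name: areas[name][0])

print("true hotspot:     ", true_hotspot)       # high_crime_low_reporting
print("predicted hotspot:", predicted_hotspot)  # medium_crime_high_reporting
```

Here 100 crimes at a 0.2 reporting rate yield only 20 reports, while 60 crimes at a 0.8 rate yield 48, so the predicted hotspot is displaced to the high-reporting area, matching the displacement pattern the paper describes.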
arXiv Detail & Related papers (2021-01-30T01:57:22Z) - The role of collider bias in understanding statistics on racially biased policing [0.0]
Using the same data, contradictory conclusions have been reached about whether unarmed Black civilians are more likely to be shot by police than unarmed white civilians.
We provide a causal Bayesian network model that explains this discrepancy as an instance of collider bias, also known as Berkson's paradox.
arXiv Detail & Related papers (2020-07-16T15:26:23Z)
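Collider bias of the kind the last entry describes can be reproduced in a few lines. This is an illustrative simulation, not the paper's Bayesian network: group membership and armed status are independent in the population, but both raise the probability of a police encounter, so conditioning on encounters (the collider) induces a spurious association in the recorded data. All rates are invented for the example.

```python
import random

random.seed(1)

population = []
for _ in range(200_000):
    group = random.random() < 0.5   # True = group X (independent of armed)
    armed = random.random() < 0.1   # same armed rate in both groups
    # Both armed status and group membership raise the stop probability.
    p_stop = 0.05 + (0.3 if armed else 0.0) + (0.2 if group else 0.0)
    stopped = random.random() < p_stop
    population.append((group, armed, stopped))

def armed_rate(records):
    return sum(1 for _, a, _ in records if a) / len(records)

everyone_x = [r for r in population if r[0]]
everyone_y = [r for r in population if not r[0]]
stops_x = [r for r in everyone_x if r[2]]
stops_y = [r for r in everyone_y if r[2]]

# In the full population, armed rates are equal across groups (~0.1 each).
print(round(armed_rate(everyone_x), 3), round(armed_rate(everyone_y), 3))
# Among those stopped, group X appears far less likely to be armed.
print(round(armed_rate(stops_x), 3), round(armed_rate(stops_y), 3))
```

Analyses that condition on recorded encounters therefore see an association between group and armed status that does not exist in the population, which is how the same data can support contradictory conclusions.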
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.