Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool
- URL: http://arxiv.org/abs/2405.07715v1
- Date: Mon, 13 May 2024 13:03:33 GMT
- Title: Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool
- Authors: Marta Ziosi, Dasha Pruss
- Abstract summary: We show that stakeholders from different groups articulate diverse problem diagnoses of the tool's algorithmic bias.
We find that stakeholders use evidence of algorithmic bias to reform the policies around police patrol allocation.
We identify the implicit assumptions and scope of these varied uses of algorithmic bias as evidence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a critical, qualitative study of the social role of algorithmic bias in the context of the Chicago crime prediction algorithm, a predictive policing tool that forecasts when and where in the city crime is most likely to occur. Through interviews with 18 Chicago-area community organizations, academic researchers, and public sector actors, we show that stakeholders from different groups articulate diverse problem diagnoses of the tool's algorithmic bias, strategically using it as evidence to advance criminal justice interventions that align with stakeholders' positionality and political ends. Drawing inspiration from Catherine D'Ignazio's taxonomy of "refusing and using" data, we find that stakeholders use evidence of algorithmic bias to reform the policies around police patrol allocation; reject algorithm-based policing interventions; reframe crime as a structural rather than interpersonal problem; reveal data on authority figures in an effort to subvert their power; repair and heal families and communities; and, in the case of more powerful actors, to reaffirm their own authority or existing power structures. We identify the implicit assumptions and scope of these varied uses of algorithmic bias as evidence, showing that they require different (and sometimes conflicting) values about policing and AI. This divergence reflects long-standing tensions in the criminal justice reform landscape between the values of liberation and healing often centered by system-impacted communities and the values of surveillance and deterrence often instantiated in data-driven reform measures. We advocate for centering the interests and experiential knowledge of communities impacted by incarceration to ensure that evidence of algorithmic bias can serve as a device to challenge the status quo.
Related papers
- Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity
Algorithmic recourse has gained attention as a means of giving persons agency in their interactions with AI systems.
Recent work has shown that recourse itself may be unfair due to differences in the initial circumstances of individuals.
Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change; a minimal cost sketch follows this entry.
arXiv Detail & Related papers (2024-01-29T11:55:45Z)
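As a minimal sketch of why initial circumstances make recourse unequal, consider a hypothetical linear scoring model and measure recourse effort as the distance an individual must move to cross the decision boundary. The weights and starting points below are invented for illustration; they are not from the paper.

```python
# Minimal sketch under assumed values, not the paper's model: recourse
# effort for a linear scorer, measured as the smallest feature change
# needed to reach the favourable side of the boundary w.x + b = 0.
import numpy as np

w = np.array([1.0, 0.5])   # hypothetical model weights
b = -2.0                   # hypothetical intercept

def effort(x: np.ndarray) -> float:
    """L2 distance from x to the favourable side of the hyperplane."""
    return max(0.0, -(w @ x + b)) / np.linalg.norm(w)

near = np.array([1.5, 0.8])  # starts close to the boundary
far = np.array([0.2, 0.1])   # starts far from the boundary
print(f"effort for 'near': {effort(near):.2f}")  # ~0.09, a small change
print(f"effort for 'far':  {effort(far):.2f}")   # ~1.57, far more effort

# Time compounds the gap: if the boundary shifts while 'far' is still
# acting (say b moves to -2.5), the remaining effort grows again.
```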
- The Impact of Differential Feature Under-reporting on Algorithmic Fairness
We present an analytically tractable model of differential feature under-reporting.
We then use this model to characterize the impact of this kind of data bias on algorithmic fairness.
Our results show that, in real-world data settings, under-reporting typically leads to increasing disparities; a toy simulation follows this entry.
arXiv Detail & Related papers (2024-01-16T19:16:22Z)
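The simulation below is my own toy construction, not the paper's analytically tractable model: it assumes a single feature that is recorded as zero (under-reported) at a higher rate for one group, fits an ordinary least-squares line, and shows that a gap in predicted scores emerges even though the groups' true feature distributions are identical.

```python
# Toy simulation of differential feature under-reporting (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
x_true = rng.normal(1.0, 1.0, n)     # identical true feature for both groups
y = x_true + rng.normal(0, 0.5, n)   # outcome depends only on the true feature

# Under-reporting: the feature is missing (recorded as 0) more often for B.
miss_rate = np.where(group == 1, 0.5, 0.1)
x_obs = np.where(rng.random(n) < miss_rate, 0.0, x_true)

# Fit least squares on the observed (under-reported) feature.
slope, intercept = np.polyfit(x_obs, y, 1)
pred = slope * x_obs + intercept
gap = pred[group == 0].mean() - pred[group == 1].mean()
print(f"mean predicted-score gap (A - B): {gap:.3f}")  # positive: B scored lower
```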
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset; a sketch of the edge-weakening idea follows this entry.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
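The sketch below illustrates the edge-weakening idea in a small linear structural causal model; the variables (group, skill, score) and the shrink-the-coefficient step are my assumptions for exposition, not D-BIAS's actual implementation or interface.

```python
# Illustrative edge weakening in a linear SCM (assumed setup, not D-BIAS code):
# "weakening" the biased edge group -> score means regenerating the score
# column with that structural coefficient shrunk toward zero.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)   # sensitive attribute
skill = rng.normal(0, 1, n)     # legitimate cause of the score

def simulate_score(edge_weight: float) -> np.ndarray:
    """Structural equation: score = skill + edge_weight * group + noise."""
    return skill + edge_weight * group + rng.normal(0, 0.3, n)

biased = simulate_score(edge_weight=1.0)    # original (biased) dataset
debiased = simulate_score(edge_weight=0.2)  # after the user weakens the edge

for name, score in [("biased", biased), ("debiased", debiased)]:
    gap = score[group == 1].mean() - score[group == 0].mean()
    print(f"{name}: mean score gap between groups = {gap:.2f}")
```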
- Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these techniques largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z)
- A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation
We argue that developers largely have a monopoly on information about how their systems actually work.
We argue that robust accountability regimes must establish opportunities for publics to cohere around shared experiences and interests.
arXiv Detail & Related papers (2022-03-02T23:22:03Z)
- Algorithmic Fairness Datasets: the Story so Far
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and the scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Anatomizing Bias in Facial Analysis
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on the gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Analyzing a Carceral Algorithm used by the Pennsylvania Department of Corrections
This paper is focused on the Pennsylvania Additive Classification Tool (PACT) used to classify prisoners' custody levels while they are incarcerated.
The algorithm in this case determines the likelihood that a person will face additional disciplinary action, complete required programming, and gain experiences that, among other things, are distilled into variables feeding into the parole algorithm.
arXiv Detail & Related papers (2021-12-06T18:47:31Z)
- Statistical discrimination in learning agents
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- The effect of differential victim crime reporting on predictive policing systems
We show how differential victim crime reporting rates can lead to outcome disparities in common crime hot spot prediction models.
Our results suggest that differential crime reporting rates can lead to a displacement of predicted hot spots from high-crime but low-reporting areas to high- or medium-crime, high-reporting areas; a toy illustration follows this entry.
arXiv Detail & Related papers (2021-01-30T01:57:22Z)
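A toy illustration of the displacement effect, with invented numbers rather than the paper's models: if hot spots are ranked by reported incidents, a high-crime area with a low victim reporting rate can drop out of the predicted top-k in favour of better-reporting areas.

```python
# Assumed rates for illustration: area 0 has the most crime but the
# lowest victim reporting rate, so ranking by *reported* counts displaces it.
import numpy as np

rng = np.random.default_rng(2)
true_rate = np.array([50, 40, 30, 20, 10])        # true incidents per area
report_pct = np.array([0.3, 0.9, 0.9, 0.9, 0.9])  # area 0 under-reports

reported = rng.binomial(true_rate, report_pct)    # incidents that enter the data
top2_true = np.argsort(true_rate)[::-1][:2]
top2_pred = np.argsort(reported)[::-1][:2]
print("true top-2 hot spots:     ", top2_true)    # includes area 0
print("predicted top-2 hot spots:", top2_pred)    # area 0 likely displaced
```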
- Pursuing Open-Source Development of Predictive Algorithms: The Case of Criminal Sentencing Algorithms
We argue that open-source algorithm development should be the standard in highly consequential contexts.
We suggest these issues are exacerbated by the proprietary and expensive nature of virtually all widely used criminal sentencing algorithms.
arXiv Detail & Related papers (2020-11-12T14:53:43Z)