Judging the algorithm: A case study on the risk assessment tool for
gender-based violence implemented in the Basque country
- URL: http://arxiv.org/abs/2203.03723v2
- Date: Wed, 20 Apr 2022 13:00:42 GMT
- Title: Judging the algorithm: A case study on the risk assessment tool for
gender-based violence implemented in the Basque country
- Authors: Ana Valdivia, Cari Hyde-Vaamonde and Julián García-Marcos
- Abstract summary: Since 2010, the output of a risk assessment tool that predicts how likely an individual is to commit severe violence against their partner has been integrated into Basque Country courtrooms.
The EPV-R, the tool developed to assist police officers in assessing gender-based violence cases, was also incorporated to support the decision-making of judges.
With insufficient training, judges are exposed to an algorithmic output that influences the human decision to adopt measures in cases of gender-based violence.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Since 2010, the output of a risk assessment tool that predicts how likely an
individual is to commit severe violence against their partner has been
integrated into Basque Country courtrooms. The EPV-R, the tool developed
to assist police officers in assessing gender-based violence cases,
was also incorporated to support the decision-making of judges. With
insufficient training, judges are exposed to an algorithmic output that
influences the human decision to adopt measures in cases of gender-based
violence.
In this paper, we examine the risks, harms and limits of algorithmic
governance within the context of gender-based violence. Through the lens of a
Spanish judge exposed to this tool, we analyse how the EPV-R is impacting
the justice system. Moving beyond the risks of unfair and biased algorithmic
outputs, we examine legal, social and technical pitfalls, such as opaque
implementation, the efficiency paradox and feedback loops, that could lead to
unintended consequences for women who suffer gender-based violence. Our
interdisciplinary framework highlights the importance of understanding the
impact and influence of risk assessment tools within judicial decision-making
and raises awareness about their implementation in this context.
Related papers
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA)
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z) - Identifying Risk Patterns in Brazilian Police Reports Preceding
Femicides: A Long Short Term Memory (LSTM) Based Analysis [0.0]
Femicide refers to the killing of a female victim, often perpetrated by an intimate partner or family member, and is also associated with gender-based violence.
In this study, we employed the Long Short Term Memory (LSTM) technique to identify patterns of behavior in Brazilian police reports preceding femicides.
Our first objective was to classify the content of these reports as indicating either a lower or higher risk of the victim being murdered, achieving an accuracy of 66%.
In the second approach, we developed a model to predict the next action a victim might experience within a sequence of patterned events.
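The summary above describes an LSTM applied to sequences of police-report features. As a minimal illustration of the mechanism only (not the authors' model, whose architecture, features and weights are unknown here), a single LSTM cell step over a toy sequence can be written in NumPy; every size and weight below is a hypothetical placeholder:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U and b stack the input, forget,
    cell-candidate and output gates along the first axis (4*H rows)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # (4H,) pre-activations
    i = sigmoid(z[0:H])                  # input gate
    f = sigmoid(z[H:2 * H])              # forget gate
    g = np.tanh(z[2 * H:3 * H])          # candidate cell state
    o = sigmoid(z[3 * H:4 * H])          # output gate
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 8, 4                              # toy feature / hidden sizes
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for t in range(5):                       # a toy sequence of 5 report vectors
    x = rng.normal(size=D)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                           # final hidden state, e.g. fed to a classifier
```

In a classifier like the one described, the final hidden state `h` would be passed through a dense layer with a sigmoid to score lower vs. higher risk.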
arXiv Detail & Related papers (2024-01-04T23:05:39Z) - Balancing detectability and performance of attacks on the control
channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs)
This research is motivated by the recent interest of the research community for adversarial and poisoning attacks applied to MDPs, and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z) - The Impact of Algorithmic Risk Assessments on Human Predictions and its
Analysis via Crowdsourcing Studies [79.66833203975729]
We conduct a vignette study in which laypersons are tasked with predicting future re-arrests.
Our key findings are as follows: Participants often predict that an offender will be rearrested even when they deem the likelihood of re-arrest to be well below 50%.
Judicial decisions, unlike participants' predictions, depend in part on factors that are unrelated to the likelihood of re-arrest.
arXiv Detail & Related papers (2021-09-03T11:09:10Z) - Machine learning for risk assessment in gender-based crime [0.0]
We propose to apply Machine Learning (ML) techniques to create models that accurately predict the recidivism risk of a gender-violence offender.
The relevance of this work is threefold: (i) the proposed ML method outperforms the preexisting risk assessment algorithm based on classical statistical techniques, (ii) the study has been conducted through an official specific-purpose database with more than 40,000 reports of gender violence, and (iii) two new quality measures are proposed for assessing the effective police protection that a model supplies and the overload in the invested resources that it generates.
arXiv Detail & Related papers (2021-06-22T15:05:20Z) - Equality before the Law: Legal Judgment Consistency Analysis for
Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, Legal Inconsistency Coefficient (LInCo)
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
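LInCo is described as measuring judicial inconsistency through the disagreement of judgment results given by LJP models trained on different groups. A simplified sketch of that idea, computing the average pairwise disagreement rate over a shared set of cases (the paper's actual LInCo formula may weight or normalise differently; the judgment vectors below are invented):

```python
from itertools import combinations

def inconsistency(predictions):
    """Average pairwise disagreement rate among judgment vectors
    produced by models trained on different groups. A simplified
    stand-in for LInCo, not the paper's exact definition."""
    pairs = list(combinations(predictions, 2))
    total = 0.0
    for a, b in pairs:
        total += sum(x != y for x, y in zip(a, b)) / len(a)
    return total / len(pairs)

# toy judgments (e.g. charge labels) from models trained on two regions
region_a = [0, 1, 1, 2, 0]
region_b = [0, 1, 2, 2, 0]
print(inconsistency([region_a, region_b]))  # 0.2
```

Identical judgment vectors give 0.0, so a larger value signals more inconsistency between the groups the models were trained on.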
arXiv Detail & Related papers (2021-03-25T14:28:00Z) - Experimental Evaluation of Algorithm-Assisted Human Decision-Making:
Application to Pretrial Public Safety Assessment [0.8749675983608171]
We develop a statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions.
We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial.
We find that providing the PSA to the judge has little overall impact on the judge's decisions and subsequent arrestee behavior.
arXiv Detail & Related papers (2020-12-04T20:48:44Z) - Overcoming Failures of Imagination in AI Infused System Development and
Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure"
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z) - A Risk Assessment of a Pretrial Risk Assessment Tool: Tussles,
Mitigation Strategies, and Inherent Limits [0.0]
We perform a risk assessment of the Public Safety Assessment (PSA), a software used in San Francisco and other jurisdictions to assist judges in deciding whether defendants need to be detained before their trial.
We articulate benefits and limitations of the PSA solution, as well as suggest mitigation strategies.
We then draft the Handoff Tree, a novel algorithmic approach to pretrial justice that accommodates some of the inherent limitations of risk assessment tools by design.
arXiv Detail & Related papers (2020-05-14T23:56:57Z) - Fairness Evaluation in Presence of Biased Noisy Labels [84.12514975093826]
We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model.
Our experimental results on two real world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
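The sensitivity-analysis idea above can be sketched with a standard correction for class-conditional label noise: if observed positive rates are distorted by group-dependent flip rates, inverting the noise model shows how much an apparent fairness gap moves. The flip rates and observed rates below are assumptions for illustration, not values from the paper:

```python
def corrected_rate(p_obs, flip01, flip10):
    """Invert class-conditional label noise: the observed positive rate is
    p_obs = p_true * (1 - flip10) + (1 - p_true) * flip01,
    where flip01 is the 0->1 noise rate and flip10 the 1->0 rate."""
    return (p_obs - flip01) / (1.0 - flip01 - flip10)

# observed re-arrest rates in two groups (toy numbers)
obs_a, obs_b = 0.30, 0.20
gap_naive = obs_a - obs_b

# assume group A's true positives are under-recorded more heavily
true_a = corrected_rate(obs_a, flip01=0.02, flip10=0.20)
true_b = corrected_rate(obs_b, flip01=0.02, flip10=0.05)
gap_corrected = true_a - true_b
print(round(gap_naive, 3), round(gap_corrected, 3))  # 0.1 0.165
```

Sweeping the assumed flip rates over a plausible range is what turns this one-off correction into the kind of sensitivity analysis the summary describes: if small noise assumptions move the gap substantially, conclusions drawn from the noisy labels are fragile.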
arXiv Detail & Related papers (2020-03-30T20:47:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.