Machine learning for risk assessment in gender-based crime
- URL: http://arxiv.org/abs/2106.11847v1
- Date: Tue, 22 Jun 2021 15:05:20 GMT
- Title: Machine learning for risk assessment in gender-based crime
- Authors: Ángel González-Prieto, Antonio Brú, Juan Carlos Nuño, José Luis González-Álvarez
- Abstract summary: We propose to apply Machine Learning (ML) techniques to create models that accurately predict the recidivism risk of a gender-violence offender.
The relevance of this work is threefold: (i) the proposed ML method outperforms the preexisting risk assessment algorithm based on classical statistical techniques, (ii) the study has been conducted through an official specific-purpose database with more than 40,000 reports of gender violence, and (iii) two new quality measures are proposed for assessing the effective police protection that a model supplies and the overload in the invested resources that it generates.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gender-based crime is one of the most concerning scourges of contemporary
society. Governments worldwide have invested substantial economic and human
resources to eradicate this threat. Despite these efforts, accurately
predicting the risk that a victim of gender violence will be attacked again
remains a very hard open problem. The development of new
methods for issuing accurate, fair and quick predictions would allow police
forces to select the most appropriate measures to prevent recidivism. In this
work, we propose to apply Machine Learning (ML) techniques to create models
that accurately predict the recidivism risk of a gender-violence offender. The
relevance of the contribution of this work is threefold: (i) the proposed ML
method outperforms the preexisting risk assessment algorithm based on classical
statistical techniques, (ii) the study has been conducted through an official
specific-purpose database with more than 40,000 reports of gender violence, and
(iii) two new quality measures are proposed for assessing the effective police
protection that a model supplies and the overload in the invested resources
that it generates. Additionally, we propose a hybrid model that combines the
statistical prediction methods with the ML method, permitting authorities to
implement a smooth transition from the preexisting model to the ML-based model.
This hybrid design enables a decision-making process that optimally balances
the efficiency of the police system against the aggressiveness of the
protection measures taken.
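Since the abstract describes the hybrid scheme and the two quality measures only at a high level, the following is a minimal, hypothetical sketch of how such a blend could be wired together. The random-forest stand-in, the convex blending rule, the decision threshold, and both measure definitions are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one feature row per report, label 1 if the
# offender recidivated. The real study uses an official police database.
X_train = rng.normal(size=(1000, 20))
y_train = rng.integers(0, 2, size=1000)

ml_model = RandomForestClassifier(n_estimators=200, random_state=0)
ml_model.fit(X_train, y_train)

def statistical_risk_score(X):
    """Stand-in for the preexisting statistical risk score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-X[:, :5].sum(axis=1)))

def hybrid_risk(X, alpha):
    """Convex blend: alpha=1 reproduces the legacy statistical model,
    alpha=0 is pure ML; lowering alpha over time gives a smooth
    transition between the two."""
    stat = statistical_risk_score(X)
    ml = ml_model.predict_proba(X)[:, 1]
    return alpha * stat + (1.0 - alpha) * ml

def protection_rate(risk, recidivated, threshold=0.5):
    """One plausible reading of 'effective police protection': the share
    of cases that did recidivate and had been flagged as high risk."""
    return (risk >= threshold)[recidivated == 1].mean()

def resource_overload(risk, threshold=0.5):
    """One plausible reading of 'resource overload': the share of all
    cases flagged high risk, i.e. the protective workload generated."""
    return (risk >= threshold).mean()

X_new = rng.normal(size=(200, 20))
risk = hybrid_risk(X_new, alpha=0.5)
print(resource_overload(risk))
```

Sweeping alpha and the threshold then traces the trade-off the abstract describes: protection_rate captures how well victims at risk are covered, while resource_overload captures how aggressive, and how costly, the protection measures become.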
Related papers
- SafeMLRM: Demystifying Safety in Multi-modal Large Reasoning Models [50.34706204154244]
Acquiring reasoning capabilities catastrophically degrades inherited safety alignment.
Certain scenarios suffer 25 times higher attack rates.
Despite tight reasoning-answer safety coupling, MLRMs demonstrate nascent self-correction.
arXiv Detail & Related papers (2025-04-09T06:53:23Z)
- Representation-based Reward Modeling for Efficient Safety Alignment of Large Language Model [84.00480999255628]
Reinforcement Learning algorithms for safety alignment of Large Language Models (LLMs) encounter the challenge of distribution shift.
Current approaches typically address this issue through online sampling from the target policy.
We propose a new framework that leverages the model's intrinsic safety judgment capability to extract reward signals.
arXiv Detail & Related papers (2025-03-13T06:40:34Z)
- Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities [49.09703018511403]
Evaluations of large language model (LLM) risks and capabilities are increasingly being incorporated into AI risk management and governance frameworks.
Currently, most risk evaluations are conducted by designing inputs that elicit harmful behaviors from the system.
We propose evaluating LLMs with model tampering attacks which allow for modifications to latent activations or weights.
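As a rough illustration of how tampering differs from input-space evaluation, the snippet below perturbs a model's latent activations through a PyTorch forward hook; the toy model, layer choice, and noise scale are illustrative assumptions, not the paper's protocol.

```python
import torch
import torch.nn as nn

# Toy stand-in model; a real evaluation would target an LLM's layers.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

def tamper(module, inputs, output):
    # Returning a value from a forward hook replaces the layer output,
    # here with a noise-perturbed copy of the latent activations.
    return output + 0.5 * torch.randn_like(output)

handle = model[1].register_forward_hook(tamper)
x = torch.randn(2, 16)
tampered_logits = model(x)  # forward pass under activation tampering
handle.remove()
clean_logits = model(x)     # baseline pass with the hook removed
```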
arXiv Detail & Related papers (2025-02-03T18:59:16Z)
- Achieving $\widetilde{\mathcal{O}}(\sqrt{T})$ Regret in Average-Reward POMDPs with Known Observation Models [56.92178753201331]
We tackle average-reward infinite-horizon POMDPs with an unknown transition model.
We present a novel and simple estimator that overcomes this barrier.
arXiv Detail & Related papers (2025-01-30T22:29:41Z)
- Predicting Femicide in Veracruz: A Fuzzy Logic Approach with the Expanded MFM-FEM-VER-CP-2024 Model [0.0]
The article focuses on the urgent issue of femicide in Veracruz, Mexico, and the development of the MFM-FEM-VER-CP-2024 model.
This model addresses the complexity and uncertainty inherent in gender-based violence by formalizing risk factors such as coercive control, dehumanization, and the cycle of violence.
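As a loose illustration of the fuzzy-logic idea (not the MFM-FEM-VER-CP-2024 model itself), the sketch below maps hypothetical normalized risk factors to membership degrees and combines them with a fuzzy OR; the factor names, membership shapes, and aggregation rule are all assumptions.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def femicide_risk(coercive_control, dehumanization, violence_cycle):
    # Each input is a hypothetical factor score normalized to [0, 1].
    memberships = [
        triangular(coercive_control, 0.2, 0.7, 1.0),
        triangular(dehumanization, 0.3, 0.8, 1.0),
        triangular(violence_cycle, 0.1, 0.6, 1.0),
    ]
    return max(memberships)  # fuzzy OR: the worst factor drives the risk

print(femicide_risk(0.9, 0.4, 0.5))  # 0.8 under these toy memberships
```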
arXiv Detail & Related papers (2024-08-31T06:00:49Z)
- Evaluating the Effectiveness of Index-Based Treatment Allocation [42.040099398176665]
When resources are scarce, an allocation policy is needed to decide who receives a resource.
This paper introduces methods to evaluate index-based allocation policies using data from a randomized control trial.
arXiv Detail & Related papers (2024-02-19T01:55:55Z)
- Identifying Risk Patterns in Brazilian Police Reports Preceding Femicides: A Long Short Term Memory (LSTM) Based Analysis [0.0]
Femicide refers to the killing of a female victim, often perpetrated by an intimate partner or family member, and is also associated with gender-based violence.
In this study, we employed the Long Short Term Memory (LSTM) technique to identify patterns of behavior in Brazilian police reports preceding femicides.
Our first objective was to classify the content of these reports as indicating either a lower or higher risk of the victim being murdered, achieving an accuracy of 66%.
In the second approach, we developed a model to predict the next action a victim might experience within a sequence of patterned events.
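A minimal PyTorch sketch of the first, classification step might look as follows; the vocabulary size, dimensions, and two-class head are illustrative assumptions rather than the study's actual architecture.

```python
import torch
import torch.nn as nn

class ReportRiskLSTM(nn.Module):
    """Maps token-id sequences from police reports to a
    lower-risk vs. higher-risk prediction."""
    def __init__(self, vocab_size=10_000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # lower vs. higher risk

    def forward(self, token_ids):
        x = self.embed(token_ids)    # (batch, seq, embed)
        _, (h_n, _) = self.lstm(x)   # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])    # class logits from the final state

model = ReportRiskLSTM()
dummy_batch = torch.randint(1, 10_000, (8, 200))  # 8 reports, 200 tokens
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([8, 2])
```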
arXiv Detail & Related papers (2024-01-04T23:05:39Z)
- Domain Generalization without Excess Empirical Risk [83.26052467843725]
A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty.
We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization.
We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk.
arXiv Detail & Related papers (2023-08-30T08:46:46Z)
- Measuring Bias in AI Models: An Statistical Approach Introducing N-Sigma [19.072543709069087]
We analyze statistical approaches to measure biases in automatic decision-making systems.
We propose a novel way to measure the biases in machine learning models using a statistical approach based on the N-Sigma method.
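The paper's exact definition is not reproduced here, but the generic idea behind an N-sigma style measure, expressing a performance gap between two demographic groups in units of its standard error, can be sketched as follows; the formula and the example counts are assumptions.

```python
import math

def n_sigma_gap(correct_a, n_a, correct_b, n_b):
    """How many standard errors separate the two groups' accuracies?
    A generic reading of an N-sigma measure, not the paper's formula."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return abs(p_a - p_b) / se

# Example: 850/1000 correct for group A vs. 800/1000 for group B.
print(round(n_sigma_gap(850, 1000, 800, 1000), 2))  # ~2.95 sigmas
```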
arXiv Detail & Related papers (2023-04-26T16:49:25Z)
- Predicted Embedding Power Regression for Large-Scale Out-of-Distribution Detection [77.1596426383046]
We develop a novel approach that calculates the probability of the predicted class label based on label distributions learned during the training process.
Our method performs better than current state-of-the-art methods with only a negligible increase in compute cost.
arXiv Detail & Related papers (2023-03-07T18:28:39Z)
- Selecting Models based on the Risk of Damage Caused by Adversarial Attacks [2.969705152497174]
Regulation, legal liabilities, and societal concerns challenge the adoption of AI in safety and security-critical applications.
One of the key concerns is that adversaries can cause harm by manipulating model predictions without being detected.
We propose a method to model and statistically estimate the probability of damage arising from adversarial attacks.
arXiv Detail & Related papers (2023-01-28T10:24:38Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the recent interest of the research community for adversarial and poisoning attacks applied to MDPs, and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.