Predicting Femicide in Veracruz: A Fuzzy Logic Approach with the Expanded MFM-FEM-VER-CP-2024 Model
- URL: http://arxiv.org/abs/2409.00359v1
- Date: Sat, 31 Aug 2024 06:00:49 GMT
- Title: Predicting Femicide in Veracruz: A Fuzzy Logic Approach with the Expanded MFM-FEM-VER-CP-2024 Model
- Authors: Carlos Medel-Ramírez, Hilario Medel-López
- Abstract summary: The article focuses on the urgent issue of femicide in Veracruz, Mexico, and the development of the MFM_FEM_VER_CP_2024 model.
This model addresses the complexity and uncertainty inherent in gender-based violence by formalizing risk factors such as coercive control, dehumanization, and the cycle of violence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The article focuses on the urgent issue of femicide in Veracruz, Mexico, and the development of the MFM_FEM_VER_CP_2024 model, a mathematical framework designed to predict femicide risk using fuzzy logic. This model addresses the complexity and uncertainty inherent in gender-based violence by formalizing risk factors such as coercive control, dehumanization, and the cycle of violence. These factors are mathematically modeled through membership functions that assess the degree of risk associated with various conditions, including personal relationships and specific acts of violence. The study enhances the original model by incorporating new rules and refining existing membership functions, which significantly improve the model's predictive accuracy.
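The abstract describes the model only at a high level, so the snippet below is a minimal, hypothetical sketch of the general fuzzy-logic machinery it refers to: trapezoidal membership functions that grade each risk factor, combined through Mamdani-style min/max rules. The factor names, scales, breakpoints, and rules are illustrative assumptions, not the actual MFM_FEM_VER_CP_2024 specification.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, 1 on [b, c], back to 0 at d."""
    rising = (x - a) / max(b - a, 1e-9)
    falling = (d - x) / max(d - c, 1e-9)
    return max(0.0, min(rising, falling, 1.0))

# Hypothetical factor scores on a 0-10 scale (placeholders, not the paper's inputs).
coercive_control  = 6.5
dehumanization    = 4.9
cycle_of_violence = 5.8

# Degree to which each factor is "high" (illustrative breakpoints).
mu_cc_high = trapmf(coercive_control,  4, 7, 10, 10)
mu_dh_high = trapmf(dehumanization,    4, 7, 10, 10)
mu_cv_high = trapmf(cycle_of_violence, 4, 7, 10, 10)

# Mamdani-style rule evaluation: AND = min, aggregation across rules = max.
# R1: IF coercive control is high AND the cycle of violence is high THEN risk is high.
# R2: IF dehumanization is high THEN risk is high.
rule1 = min(mu_cc_high, mu_cv_high)
rule2 = mu_dh_high
risk_high = max(rule1, rule2)

print(f"Membership in 'high femicide risk': {risk_high:.2f}")
```

A full Mamdani system would defuzzify the aggregated rule outputs (e.g., by centroid) into a single crisp risk score; this sketch stops at the aggregated membership degree for the high-risk category.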
Related papers
- Fragility-aware Classification for Understanding Risk and Improving Generalization [6.926253982569273]
We introduce the Fragility Index (FI), a novel metric that evaluates classification performance from a risk-averse perspective.
We derive exact reformulations for cross-entropy loss, hinge-type loss, and Lipschitz loss, and extend the approach to deep learning models.
arXiv Detail & Related papers (2025-02-18T16:44:03Z)
- The Lessons of Developing Process Reward Models in Mathematical Reasoning [62.165534879284735]
Process Reward Models (PRMs) aim to identify and mitigate intermediate errors in the reasoning processes.
We develop a consensus filtering mechanism that effectively integrates Monte Carlo (MC) estimation with Large Language Models (LLMs).
We release a new state-of-the-art PRM that outperforms existing open-source alternatives.
arXiv Detail & Related papers (2025-01-13T13:10:16Z)
- Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, resulting in uncontrollable risks in real-world applications.
Our research identifies two critical latent factors affecting RAG's confidence in its predictions.
We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z)
- Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation [54.61816424792866]
We introduce a general framework on Risk-Sensitive Distributional Reinforcement Learning (RS-DisRL), with static Lipschitz Risk Measures (LRM) and general function approximation.
We design two innovative meta-algorithms: RS-DisRL-M, a model-based strategy for model-based function approximation, and RS-DisRL-V, a model-free approach for general value function approximation.
arXiv Detail & Related papers (2024-02-28T08:43:18Z)
- Distribution-consistency Structural Causal Models [6.276417011421679]
We introduce a novel distribution-consistency assumption and, in alignment with it, propose the Distribution-consistency Structural Causal Models (DiscoSCMs).
To concretely reveal the enhanced model capacity, we introduce a new identifiable causal parameter, the probability of consistency, which holds practical significance within DiscoSCM alone.
arXiv Detail & Related papers (2024-01-29T06:46:15Z)
- Identifying Risk Patterns in Brazilian Police Reports Preceding Femicides: A Long Short Term Memory (LSTM) Based Analysis [0.0]
Femicide refers to the killing of a female victim, often perpetrated by an intimate partner or family member, and is also associated with gender-based violence.
In this study, we employed the Long Short Term Memory (LSTM) technique to identify patterns of behavior in Brazilian police reports preceding femicides.
Our first objective was to classify the content of these reports as indicating either a lower or higher risk of the victim being murdered, achieving an accuracy of 66%.
In the second approach, we developed a model to predict the next action a victim might experience within a sequence of patterned events.
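The sketch below is a minimal, hypothetical illustration of the kind of binary lower/higher-risk classifier described in this entry, written as an LSTM over tokenized report text in PyTorch; the vocabulary size, dimensions, and dummy batch are placeholders rather than the study's actual configuration.

```python
import torch
import torch.nn as nn

class RiskLSTM(nn.Module):
    """Illustrative lower/higher-risk classifier over tokenized police-report text."""
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)               # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)              # final hidden state: (1, batch, hidden_dim)
        return self.head(h_n[-1]).squeeze(-1)   # one raw logit per report

model = RiskLSTM()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 4 reports of 50 token ids each, with 0 = lower risk, 1 = higher risk.
tokens = torch.randint(1, 10_000, (4, 50))
labels = torch.tensor([0.0, 1.0, 1.0, 0.0])

loss = criterion(model(tokens), labels)
loss.backward()
optimizer.step()
print("higher-risk probabilities:", torch.sigmoid(model(tokens)).detach().tolist())
```

The second objective described above (predicting the next action in a sequence of events) would instead use the LSTM's per-step outputs with a softmax over an action vocabulary; that variant is not sketched here.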
arXiv Detail & Related papers (2024-01-04T23:05:39Z)
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
- Towards Assessing and Characterizing the Semantic Robustness of Face Recognition [55.258476405537344]
Face Recognition Models (FRMs) based on Deep Neural Networks (DNNs) inherit the well-known vulnerability of DNNs to perturbations of their input.
We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input.
arXiv Detail & Related papers (2022-02-10T12:22:09Z)
- Machine learning for risk assessment in gender-based crime [0.0]
We propose to apply Machine Learning (ML) techniques to create models that accurately predict the recidivism risk of a gender-violence offender.
The relevance of this work is threefold: (i) the proposed ML method outperforms the preexisting risk assessment algorithm based on classical statistical techniques, (ii) the study has been conducted through an official specific-purpose database with more than 40,000 reports of gender violence, and (iii) two new quality measures are proposed for assessing the effective police protection that a model supplies and the overload in the invested resources that it generates.
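As a rough, hypothetical illustration of the kind of comparison described above (an ML classifier against a simpler statistical baseline for recidivism risk), the sketch below uses scikit-learn on synthetic data; the features, labels, and models are placeholders and do not reflect the official database, the preexisting risk assessment algorithm, or the proposed quality measures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: 5,000 cases with 20 risk-indicator features and a binary recidivism label.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=5000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # stand-in for a classical score
ml_model = GradientBoostingClassifier().fit(X_train, y_train)       # stand-in for the ML approach

for name, clf in [("classical baseline", baseline), ("gradient boosting", ml_model)]:
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: ROC-AUC = {auc:.3f}")
```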
arXiv Detail & Related papers (2021-06-22T15:05:20Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular, reusable software tool, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
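ML-Doctor's own interface is not reproduced here; as a hypothetical illustration of the simplest of the four attacks, the sketch below runs a confidence-threshold membership inference test against a toy target model. The dataset, target model, and threshold are assumptions for illustration; practical attacks typically calibrate the threshold with shadow models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy target model trained on the "member" half of a synthetic dataset.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, _ = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(random_state=0).fit(X_member, y_member)

def confidence(model, X):
    """Maximum predicted class probability per sample; members tend to score higher."""
    return model.predict_proba(X).max(axis=1)

threshold = 0.9  # placeholder decision threshold
guess_member = confidence(target, X_member) > threshold
guess_nonmember = confidence(target, X_nonmember) > threshold

# Balanced attack accuracy: 0.5 means the model leaks no membership information.
attack_acc = 0.5 * (guess_member.mean() + (1.0 - guess_nonmember.mean()))
print(f"membership-inference attack accuracy: {attack_acc:.3f}")
```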
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- A Causally Formulated Hazard Ratio Estimation through Backdoor Adjustment on Structural Causal Model [0.98314893665023]
We review existing approaches to compute hazard ratios as well as their causal interpretation, if it exists.
We propose a novel approach to compute hazard ratios from observational studies using backdoor adjustment through SCMs and do-calculus.
arXiv Detail & Related papers (2020-06-22T19:10:16Z)
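To make the backdoor adjustment mentioned in the last entry concrete, here is a small, self-contained toy calculation with a discrete confounder and binary treatment and outcome; the probabilities are invented, and the paper's actual SCM-based hazard ratio estimator (which operates on time-to-event data) is not reproduced.

```python
# Backdoor adjustment: P(Y=1 | do(T=t)) = sum_z P(Y=1 | T=t, Z=z) * P(Z=z),
# where Z is a confounder set satisfying the backdoor criterion (toy numbers).
p_z = {0: 0.6, 1: 0.4}            # marginal distribution of the confounder
p_y_given_tz = {                  # P(Y=1 | T=t, Z=z)
    (0, 0): 0.10, (0, 1): 0.30,
    (1, 0): 0.25, (1, 1): 0.55,
}

def p_y_do_t(t):
    """Interventional outcome probability under do(T=t) via the backdoor formula."""
    return sum(p_y_given_tz[(t, z)] * p_z[z] for z in p_z)

p1, p0 = p_y_do_t(1), p_y_do_t(0)
print(f"P(Y=1|do(T=1)) = {p1:.3f}, P(Y=1|do(T=0)) = {p0:.3f}")
print(f"causal risk ratio = {p1 / p0:.2f}")  # a hazard ratio additionally requires survival times
```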