Improving the Fairness of Deep-Learning, Short-term Crime Prediction with Under-reporting-aware Models
- URL: http://arxiv.org/abs/2406.04382v2
- Date: Thu, 13 Jun 2024 17:53:01 GMT
- Title: Improving the Fairness of Deep-Learning, Short-term Crime Prediction with Under-reporting-aware Models
- Authors: Jiahui Wu, Vanessa Frias-Martinez
- Abstract summary: We propose a novel deep learning architecture that combines the power of pre-processing and in-processing de-biasing approaches to increase prediction fairness.
Our results show that the proposed model improves the fairness of crime predictions when compared to models with in-processing de-biasing approaches.
- Score: 1.1062397685574308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning crime predictive tools use past crime data and additional behavioral datasets to forecast future crimes. Nevertheless, these tools have been shown to suffer from unfair predictions across minority racial and ethnic groups. Current approaches to address this unfairness generally propose either pre-processing methods that mitigate the bias in the training datasets by applying corrections to crime counts based on domain knowledge or in-processing methods that are implemented as fairness regularizers to optimize for both accuracy and fairness. In this paper, we propose a novel deep learning architecture that combines the power of these two approaches to increase prediction fairness. Our results show that the proposed model improves the fairness of crime predictions when compared to models with in-processing de-biasing approaches and with models without any type of bias correction, albeit at the cost of reducing accuracy.
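As a rough, hypothetical sketch of the idea described in the abstract (not the authors' actual architecture), the snippet below combines a pre-processing correction of observed crime counts with an in-processing fairness penalty in a single training loss. The under-reporting rate, the region-to-group assignment, and the weight `lambda_fair` are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: pre-processing count correction + in-processing fairness penalty.
import torch
import torch.nn as nn

class CrimeCountPredictor(nn.Module):
    """Toy short-term predictor: past crime counts per region -> next-step counts."""
    def __init__(self, n_regions: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_regions, hidden), nn.ReLU(), nn.Linear(hidden, n_regions)
        )

    def forward(self, x):
        return self.net(x)

def fairness_penalty(errors, group_mask):
    """Penalize the gap in mean absolute error between two groups of regions."""
    mae_a = errors[:, group_mask].abs().mean()
    mae_b = errors[:, ~group_mask].abs().mean()
    return (mae_a - mae_b).abs()

# Synthetic example data: 64 time steps, 10 regions.
torch.manual_seed(0)
n_regions = 10
x = torch.rand(64, n_regions) * 5            # observed past counts
y_observed = torch.rand(64, n_regions) * 5   # observed next-step counts

# Pre-processing step (assumed): inflate observed counts by an estimated
# per-region under-reporting rate before training.
under_reporting_rate = torch.full((n_regions,), 0.2)  # assumed 20% under-reporting
y_corrected = y_observed / (1.0 - under_reporting_rate)

group_mask = torch.zeros(n_regions, dtype=torch.bool)
group_mask[:5] = True  # regions assigned to the protected group (illustrative)

model = CrimeCountPredictor(n_regions)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lambda_fair = 0.5  # weight of the in-processing fairness term (illustrative)

for epoch in range(200):
    opt.zero_grad()
    pred = model(x)
    errors = pred - y_corrected
    loss = errors.pow(2).mean() + lambda_fair * fairness_penalty(errors, group_mask)
    loss.backward()
    opt.step()
```

The penalty keeps the mean absolute error of the two region groups close, which is one common way to operationalize prediction fairness; the weight `lambda_fair` then trades accuracy against fairness, consistent with the accuracy cost noted in the abstract.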
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - Uncertainty-Guided Enhancement on Driving Perception System via Foundation Models [37.35848849961951]
We develop a method that leverages foundation models to refine predictions from existing driving perception models.
The method demonstrates a 10 to 15 percent improvement in prediction accuracy and reduces the number of queries to the foundation model by 50 percent.
arXiv Detail & Related papers (2024-10-02T00:46:19Z) - Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Inference-Time Selective Debiasing [27.578390085427156]
We propose selective debiasing -- an inference-time safety mechanism that aims to increase the overall quality of models.
We identify the potentially biased model predictions and, instead of discarding them, we debias them using LEACE -- a post-processing debiasing method.
Experiments with text classification datasets demonstrate that selective debiasing helps to close the performance gap between post-processing methods and at-training and pre-processing debiasing techniques.
arXiv Detail & Related papers (2024-07-27T21:56:23Z) - Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
arXiv Detail & Related papers (2024-05-28T20:43:53Z) - When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers via Membership Inference Attacks [17.243744418309593]
We propose an efficient MIA method against fairness-enhanced models based on fairness discrepancy results.
We also explore potential strategies for mitigating privacy leakages.
arXiv Detail & Related papers (2023-11-07T10:28:17Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - Travel Demand Forecasting: A Fair AI Approach [0.9383397937755517]
We propose a novel methodology to develop fairness-aware, highly-accurate travel demand forecasting models.
Specifically, we introduce a new fairness regularization term, which is explicitly designed to measure the correlation between prediction accuracy and protected attributes (see the illustrative sketch after this list).
Results highlight that our proposed methodology can effectively enhance fairness for multiple protected attributes while preserving prediction accuracy.
arXiv Detail & Related papers (2023-03-03T03:16:54Z) - Perturbed and Strict Mean Teachers for Semi-supervised Semantic Segmentation [22.5935068122522]
In this paper, we address the prediction accuracy problem of consistency learning methods with novel extensions of the mean-teacher (MT) model.
The accurate predictions of this model allow us to use a challenging combination of network, input-data and feature perturbations to improve the generalisation of consistency learning.
Results on public benchmarks show that our approach achieves remarkable improvements over the previous SOTA methods in the field.
arXiv Detail & Related papers (2021-11-25T04:30:56Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
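As a rough illustration of the correlation-based fairness regularization term mentioned in the Travel Demand Forecasting entry above (not code from that paper), one could penalize the Pearson correlation between per-sample prediction error and a protected attribute; the function and variable names below are assumptions for the sketch.

```python
# Illustrative only: fairness term penalizing correlation between |error| and a protected attribute.
import torch

def correlation_fairness_term(pred, target, protected):
    """Absolute Pearson correlation between per-sample absolute error and a protected attribute."""
    err = (pred - target).abs()
    err_c = err - err.mean()
    prot_c = protected.float() - protected.float().mean()
    corr = (err_c * prot_c).sum() / (err_c.norm() * prot_c.norm() + 1e-8)
    return corr.abs()

# Usage sketch: total loss = task loss + lambda * fairness term (lambda is illustrative).
pred = torch.rand(128)
target = torch.rand(128)
protected = torch.randint(0, 2, (128,))  # binary protected attribute per sample
loss = torch.nn.functional.mse_loss(pred, target) + 0.3 * correlation_fairness_term(pred, target, protected)
```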
This list is automatically generated from the titles and abstracts of the papers on this site.