Case Study: Predictive Fairness to Reduce Misdemeanor Recidivism Through
Social Service Interventions
- URL: http://arxiv.org/abs/2001.09233v1
- Date: Fri, 24 Jan 2020 23:52:55 GMT
- Title: Case Study: Predictive Fairness to Reduce Misdemeanor Recidivism Through
Social Service Interventions
- Authors: Kit T. Rodolfa, Erika Salomon, Lauren Haynes, Ivan Higuera Mendieta,
Jamie Larson, Rayid Ghani
- Abstract summary: The Los Angeles City Attorney's Office created a new Recidivism Reduction and Drug Diversion unit (R2D2).
We describe a collaboration with this new unit as a case study for the incorporation of predictive equity into machine learning based decision making.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The criminal justice system is currently ill-equipped to improve outcomes of
individuals who cycle in and out of the system with a series of misdemeanor
offenses. Often due to constraints of caseload and poor record linkage, prior
interactions with an individual may not be considered when an individual comes
back into the system, let alone in a proactive manner through the application
of diversion programs. The Los Angeles City Attorney's Office recently created
a new Recidivism Reduction and Drug Diversion unit (R2D2) tasked with reducing
recidivism in this population. Here we describe a collaboration with this new
unit as a case study for the incorporation of predictive equity into machine
learning based decision making in a resource-constrained setting. The program
seeks to improve outcomes by developing individually-tailored social service
interventions (i.e., diversions, conditional plea agreements, stayed
sentencing, or other favorable case disposition based on appropriate social
service linkage rather than traditional sentencing methods) for individuals
likely to experience subsequent interactions with the criminal justice system,
a time and resource-intensive undertaking that necessitates an ability to focus
resources on individuals most likely to be involved in a future case. Seeking
to achieve both efficiency (through predictive accuracy) and equity (improving
outcomes in traditionally under-served communities and working to mitigate
existing disparities in criminal justice outcomes), we discuss the equity
outcomes we seek to achieve, describe the corresponding choice of a metric for
measuring predictive fairness in this context, and explore a set of options for
balancing equity and efficiency when building and selecting machine learning
models in an operational public policy setting.
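The choice of fairness metric and the equity-efficiency trade-off described above lend themselves to a concrete check. The abstract does not spell out the exact metric, but for a resource-constrained, top-k intervention list a natural candidate is group-level recall (the share of each group's future recidivists who are actually flagged) and its disparity across groups. The sketch below illustrates one way such a check might be computed; the function and column names are illustrative assumptions, not taken from the paper.

```python
import pandas as pd

def topk_group_recall(scores: pd.Series, labels: pd.Series,
                      groups: pd.Series, k: int) -> pd.Series:
    """Recall (true positive rate) within each group, restricted to the
    top-k highest-risk individuals the program has capacity to serve."""
    flagged = scores.rank(method="first", ascending=False) <= k
    recall = {}
    for g in groups.unique():
        positives = (groups == g) & (labels == 1)
        # Share of this group's actual future cases captured by the top-k list.
        recall[g] = float((flagged & positives).sum()) / max(int(positives.sum()), 1)
    return pd.Series(recall, name=f"recall@{k}")

def recall_disparity(recalls: pd.Series, reference_group: str) -> pd.Series:
    """Ratio of each group's recall to a reference group's recall;
    1.0 means parity, values below 1.0 indicate relative under-service."""
    return recalls / recalls[reference_group]
```

One plausible way to balance efficiency and equity during model selection, under these assumptions, is to keep only candidate models whose precision among the top-k is within a small tolerance of the best candidate and then pick the one whose recall disparity is closest to parity; whether this is the exact operational rule used in the collaboration is not stated in the abstract.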
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool [0.9821874476902969]
We show that stakeholders from different groups articulate diverse problem diagnoses of the tool's algorithmic bias.
We find that stakeholders use evidence of algorithmic bias to reform the policies around police patrol allocation.
We identify the implicit assumptions and scope of these varied uses of algorithmic bias as evidence.
arXiv Detail & Related papers (2024-05-13T13:03:33Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Fairness Incentives in Response to Unfair Dynamic Pricing [7.991187769447732]
We design a basic simulated economy, wherein we generate corporate taxation schedules geared to incentivizing firms towards adopting fair pricing behaviours.
To cover a range of possible policy scenarios, we formulate our social planner's learning problem as a multi-armed bandit, a contextual bandit and as a full reinforcement learning (RL) problem.
We find that social welfare improves on that of the fairness-agnostic baseline, and approaches that of the analytically optimal fairness-aware baseline for the multi-armed and contextual bandit settings.
arXiv Detail & Related papers (2024-04-22T23:12:58Z)
- Impact of Fairness Regulations on Institutions' Policies and Population Qualifications [9.863310509852402]
We consider a system whose primary objective is to maximize utility by selecting the most qualified individuals.
We examine conditions under which a discrimination penalty can effectively reduce disparity in the selection.
We propose conditions that can counteract such undesirable outcomes.
arXiv Detail & Related papers (2024-04-06T07:21:41Z)
- Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity [15.78130132380848]
Algorithmic recourse has gained attention as a means of giving persons agency in their interactions with AI systems.
Recent work has shown that recourse itself may be unfair due to differences in the initial circumstances of individuals.
Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change.
arXiv Detail & Related papers (2024-01-29T11:55:45Z)
- From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment [3.683202928838613]
We argue that addressing this problem requires a notion of prospective fairness that anticipates the change in the distribution of social goods after deployment.
We are guided by an application from public administration: the use of algorithms to predict who among the recently unemployed will remain unemployed in the long term.
We simulate how such algorithmically informed policies would affect gender inequalities in long-term unemployment.
arXiv Detail & Related papers (2024-01-25T14:17:11Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity changes and opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
- Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure judicial inconsistency as the disagreement between the judgments produced by LJP models trained on different groups.
We employ LInCo to explore inconsistency in real cases and observe that both regional and gender inconsistency exist in the legal system, though gender inconsistency is much smaller than regional inconsistency.
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
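The last entry above describes a concrete measurement procedure: train legal judgment prediction models on data from different judge groups and quantify inconsistency as their disagreement on the same cases. The sketch below is a minimal illustration of that style of measurement; the exact LInCo formula is defined in the cited paper, so the pairwise-averaged disagreement rate and the names used here are illustrative assumptions only.

```python
from itertools import combinations
from typing import Callable, Dict, Sequence

# A "group model" maps a batch of case descriptions to predicted judgments.
GroupModel = Callable[[Sequence[str]], Sequence[int]]

def pairwise_inconsistency(group_models: Dict[str, GroupModel],
                           cases: Sequence[str]) -> float:
    """Average disagreement rate across all pairs of group-specific models
    on the same cases; higher values mean simulated judges from different
    groups would decide the same cases differently."""
    preds = {g: list(model(cases)) for g, model in group_models.items()}
    pairs = list(combinations(preds, 2))
    if not pairs or not cases:
        return 0.0
    rates = [
        sum(p != q for p, q in zip(preds[g1], preds[g2])) / len(cases)
        for g1, g2 in pairs
    ]
    return sum(rates) / len(rates)
```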
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.