Feedback Effects in Repeat-Use Criminal Risk Assessments
- URL: http://arxiv.org/abs/2011.14075v1
- Date: Sat, 28 Nov 2020 06:40:05 GMT
- Title: Feedback Effects in Repeat-Use Criminal Risk Assessments
- Authors: Benjamin Laufer
- Abstract summary: We show that risk can propagate over sequential decisions in ways that are not captured by one-shot tests.
Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the criminal legal context, risk assessment algorithms are touted as
data-driven, well-tested tools. Studies known as validation tests are typically
cited by practitioners to show that a particular risk assessment algorithm has
predictive accuracy, establishes legitimate differences between risk groups,
and maintains some measure of group fairness in treatment. To establish these
important goals, most tests use a one-shot, single-point measurement. Using a
Polya urn model, we explore the implications of feedback effects in sequential
scoring-decision processes. We show through simulation that risk can propagate
over sequential decisions in ways that are not captured by one-shot tests. For
example, even a very small or undetectable level of bias in risk allocation can
amplify over sequential risk-based decisions, leading to observable group
differences after a number of decision iterations. Risk assessment tools
operate in a highly complex and path-dependent process, fraught with historical
inequity. We conclude from this study that these tools do not properly account
for compounding effects, and require new approaches to development and
auditing.
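The urn dynamic behind this result can be sketched in a few lines: each risk-based decision reinforces the group it lands on, so any per-step tilt compounds over the sequence. The initial counts, number of steps, and bias parameter below are illustrative assumptions, not the paper's exact experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def polya_trajectory(a0, b0, bias=0.0, steps=2000):
    """Simulate a reinforcing scoring-decision process as a Polya urn.

    a0, b0: initial counts for two groups (e.g. high-risk labels assigned).
    bias:   a small additive tilt toward group A at each decision, standing
            in for an allocation bias too small to flag in a one-shot test.
    Returns the final share of decisions landing on group A.
    """
    a, b = float(a0), float(b0)
    for _ in range(steps):
        p_a = a / (a + b) + bias          # decision probability, slightly tilted
        p_a = min(max(p_a, 0.0), 1.0)
        if rng.random() < p_a:
            a += 1                        # reinforcement: risk begets risk
        else:
            b += 1
    return a / (a + b)

# Average final shares over many trajectories, with and without a tiny bias.
unbiased = np.mean([polya_trajectory(5, 5, bias=0.00) for _ in range(200)])
biased   = np.mean([polya_trajectory(5, 5, bias=0.02) for _ in range(200)])
```

In this sketch the unbiased urn averages near an even split, while the small per-step tilt produces a clearly visible group gap after a couple thousand sequential decisions, which is the compounding effect a single-point validation cannot detect.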
Related papers
- Data-driven decision-making under uncertainty with entropic risk measure [5.407319151576265]
The entropic risk measure is widely used in high-stakes decision making to account for tail risks associated with an uncertain loss.
To debias the empirical entropic risk estimator, we propose a strongly consistent bootstrapping procedure.
We show that cross validation methods can result in significantly higher out-of-sample risk for the insurer if the bias in validation performance is not corrected for.
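The entropic risk measure has the closed form rho_theta(L) = (1/theta) * log E[exp(theta * L)]. As a hedged illustration, the sketch below is just the plain plug-in estimator that this line of work sets out to debias, not the paper's bootstrapping procedure; the Gaussian loss sample and theta are assumptions for the example.

```python
import numpy as np

def entropic_risk(losses, theta):
    """Plug-in estimator of (1/theta) * log E[exp(theta * L)].

    Larger theta weights the tail of the loss distribution more heavily.
    A log-sum-exp shift keeps the exponentials numerically stable.
    """
    losses = np.asarray(losses, dtype=float)
    m = theta * losses.max()
    return (m + np.log(np.mean(np.exp(theta * losses - m)))) / theta

rng = np.random.default_rng(0)
losses = rng.normal(1.0, 0.5, size=100_000)
# For Gaussian losses the closed form is mu + theta * sigma**2 / 2 = 1.25 here.
est = entropic_risk(losses, theta=2.0)
```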
arXiv Detail & Related papers (2024-09-30T04:02:52Z) - Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, resulting in uncontrollable risks in real-world applications.
Our research identifies two critical latent factors affecting RAG's confidence in its predictions.
We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z) - Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z) - On (assessing) the fairness of risk score models [2.0646127669654826]
Risk models are of interest for a number of reasons, including the fact that they communicate uncertainty about the potential outcomes to users.
We identify the provision of similar value to different groups as a key desideratum for risk score fairness.
We introduce a novel calibration error metric that is less sample size-biased than previously proposed metrics.
arXiv Detail & Related papers (2023-02-17T12:45:51Z) - Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation [54.72195809248172]
We present a new estimator leveraging a novel concept: retrospective reshuffling of participants across experimental arms at the end of an RCT.
We prove theoretically that such an estimator is more accurate than common estimators based on sample means.
arXiv Detail & Related papers (2023-02-06T05:17:22Z) - Risk-aware linear bandits with convex loss [0.0]
We propose an optimistic UCB algorithm to learn optimal risk-aware actions, with regret guarantees similar to those of generalized linear bandits.
This approach requires solving a convex problem at each round of the algorithm, which we can relax by allowing only an approximate solution obtained by online gradient descent.
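The optimistic-UCB idea can be illustrated with a minimal linear-bandit loop: score each candidate action by its estimated reward plus a confidence bonus, play the maximizer, and update a ridge regression. This is a generic LinUCB-style sketch, not the paper's risk-aware algorithm or its convex-relaxation step; the reward model and all parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, alpha = 3, 500, 1.0
theta_true = np.array([0.5, -0.3, 0.8])      # hypothetical true parameter

A = np.eye(d)                                # ridge-regularized Gram matrix
b = np.zeros(d)
for t in range(T):
    arms = rng.normal(size=(10, d))          # candidate action features
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b                    # ridge estimate of the parameter
    # Optimistic score: estimated reward plus a confidence-width bonus.
    ucb = arms @ theta_hat + alpha * np.sqrt(
        np.einsum("ij,jk,ik->i", arms, A_inv, arms)
    )
    x = arms[np.argmax(ucb)]                 # play the optimistic maximizer
    r = x @ theta_true + 0.1 * rng.normal()  # observe noisy linear reward
    A += np.outer(x, x)                      # rank-one Gram update
    b += r * x

theta_final = np.linalg.solve(A, b)
```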
arXiv Detail & Related papers (2022-09-15T09:09:53Z) - Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z) - A Survey of Risk-Aware Multi-Armed Bandits [84.67376599822569]
We review various risk measures of interest, and comment on their properties.
We consider algorithms for the regret minimization setting, where the exploration-exploitation trade-off manifests.
We conclude by commenting on persisting challenges and fertile areas for future research.
arXiv Detail & Related papers (2022-05-12T02:20:34Z) - Two steps to risk sensitivity [4.974890682815778]
Conditional value-at-risk (CVaR) is a risk measure for modeling human and animal planning.
We adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers.
We then consider a further critical property of risk sensitivity, namely time consistency, showing alternatives to this form of CVaR.
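CVaR at level alpha is the expected loss in the worst (1 - alpha) tail of the distribution. The sketch below is the standard static empirical estimator, not the sequential distributional treatment that paper adopts; the Gaussian sample and alpha are illustrative assumptions.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR: mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses)))    # index where the tail begins
    return losses[k:].mean()

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, size=200_000)
# For a standard normal, CVaR_0.95 = phi(z_0.95) / 0.05, about 2.063.
est = cvar(sample, alpha=0.95)
```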
arXiv Detail & Related papers (2021-11-12T16:27:47Z) - Uncertainty-aware Score Distribution Learning for Action Quality Assessment [91.05846506274881]
We propose an uncertainty-aware score distribution learning (USDL) approach for action quality assessment (AQA).
Specifically, we regard an action as an instance associated with a score distribution, which describes the probability of different evaluated scores.
Under the circumstance where fine-grained score labels are available, we devise a multi-path uncertainty-aware score distributions learning (MUSDL) method to explore the disentangled components of a score.
arXiv Detail & Related papers (2020-06-13T15:41:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.