On the Role of Negative Precedent in Legal Outcome Prediction
- URL: http://arxiv.org/abs/2208.08225v2
- Date: Thu, 6 Oct 2022 09:29:42 GMT
- Title: On the Role of Negative Precedent in Legal Outcome Prediction
- Authors: Josef Valvoda, Ryan Cotterell, Simone Teufel
- Abstract summary: Legal outcome prediction, the prediction of positive outcome, is an increasingly popular task in AI.
We turn our focus to negative outcomes here, and introduce a new task of negative outcome prediction.
We discover an asymmetry in existing models' ability to predict positive and negative outcomes.
We develop two new models inspired by the dynamics of a court process.
- Score: 65.30798081417115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Every legal case sets a precedent by developing the law in one of the
following two ways. It either expands its scope, in which case it sets positive
precedent, or it narrows it, in which case it sets negative precedent. Legal
outcome prediction, the prediction of positive outcome, is an increasingly
popular task in AI. In contrast, we turn our focus to negative outcomes here,
and introduce a new task of negative outcome prediction. We discover an
asymmetry in existing models' ability to predict positive and negative
outcomes. Where the state-of-the-art outcome prediction model we used predicts
positive outcomes at 75.06 F1, it predicts negative outcomes at only 10.09 F1,
worse than a random baseline. To address this performance gap, we develop two
new models inspired by the dynamics of a court process. Our first model
significantly improves positive outcome prediction score to 77.15 F1 and our
second model more than doubles the negative outcome prediction performance to
24.01 F1. Despite this improvement, shifting focus to negative outcomes reveals
that there is still much room for improvement for outcome prediction models.
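The asymmetry the abstract reports can be seen in miniature with per-class F1. Below is an illustrative sketch (not the paper's model or data; all counts are made up): a classifier that almost always predicts the majority "positive outcome" class scores a high positive-outcome F1 while its negative-outcome F1 collapses.

```python
# Hypothetical counts: 90 positive-outcome cases, 10 negative-outcome
# cases; the model predicts "positive" for 98 of the 100 cases.
# This is an illustration of the metric, not the paper's experiment.

def f1(tp, fp, fn):
    """Standard F1 = 2*TP / (2*TP + FP + FN); 0.0 if undefined."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def class_f1(gold, pred, cls):
    """F1 treating `cls` as the target class."""
    tp = sum(g == cls and p == cls for g, p in zip(gold, pred))
    fp = sum(g != cls and p == cls for g, p in zip(gold, pred))
    fn = sum(g == cls and p != cls for g, p in zip(gold, pred))
    return f1(tp, fp, fn)

gold = [1] * 90 + [0] * 10                     # 1 = positive outcome
pred = [1] * 89 + [0] + [1] * 9 + [0]          # model over-predicts 1

print(round(class_f1(gold, pred, 1), 2))       # → 0.95
print(round(class_f1(gold, pred, 0), 2))       # → 0.17
```

The same overall accuracy (90%) thus hides a large per-class gap, which is why evaluating only positive-outcome F1 can overstate how well a model understands the law.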
Related papers
- Building Defect Prediction Models by Online Learning Considering Defect Overlooking [1.5869998695491834]
Building defect prediction models based on online learning can enhance prediction accuracy.
A module predicted as "non-defective" may receive fewer test cases.
Erroneous test results are then used as learning data by online learning, which could negatively affect prediction accuracy.
arXiv Detail & Related papers (2024-04-17T03:20:46Z) - ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecast.
We also introduce ExBooster, which captures the uncertainty in prediction outcomes by employing multiple random samples.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
arXiv Detail & Related papers (2024-02-02T10:34:13Z) - Linking a predictive model to causal effect estimation [21.869233469885856]
This paper first tackles the challenge of estimating the causal effect of any feature (as the treatment) on the outcome w.r.t. a given instance.
The theoretical results naturally link a predictive model to causal effect estimations and imply that a predictive model is causally interpretable.
We use experiments to demonstrate that various types of predictive models, when satisfying the conditions identified in this paper, can estimate the causal effects of features as accurately as state-of-the-art causal effect estimation methods.
arXiv Detail & Related papers (2023-04-10T13:08:16Z) - Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models with few examples show strong prediction bias across labels.
Although few-shot fine-tuning can mitigate the prediction bias, our analysis shows that models improve performance by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z) - Constrained Generalized Additive 2 Model with Consideration of High-Order Interactions [0.39146761527401425]
We propose CGA2M+, which is based on the Generalized Additive 2 Model (GA2M).
arXiv Detail & Related papers (2021-06-05T08:31:20Z) - Sample Selection Bias in Evaluation of Prediction Performance of Causal Models [0.0]
Causal models are notoriously difficult to validate because they make untestable assumptions regarding confounding.
We revisit the prediction performance of several recently proposed causal models tested on a genetic perturbation data set of Kemmeren.
We find that sample selection bias is likely a key driver of model performance.
arXiv Detail & Related papers (2021-06-03T15:15:30Z) - Positive-Congruent Training: Towards Regression-Free Model Updates [87.25247195148187]
In image classification, sample-wise inconsistencies appear as "negative flips": a new model incorrectly predicts the output for a test sample that was correctly classified by the old (reference) model.
We propose a simple approach for PC training, Focal Distillation, which enforces congruence with the reference model.
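The negative-flip metric described above can be sketched directly (an assumed formulation based on the summary; the data and names here are hypothetical, not from the paper):

```python
# Count test samples the old (reference) model classified correctly
# but the new model gets wrong -- the "negative flips" that make a
# model update feel like a regression even at similar accuracy.

def negative_flip_rate(gold, old_pred, new_pred):
    flips = sum(
        o == g and n != g
        for g, o, n in zip(gold, old_pred, new_pred)
    )
    return flips / len(gold)

gold     = ["cat", "dog", "dog", "bird", "cat"]
old_pred = ["cat", "dog", "cat", "bird", "cat"]   # 4/5 correct
new_pred = ["cat", "cat", "dog", "bird", "dog"]   # 3/5 correct

print(negative_flip_rate(gold, old_pred, new_pred))  # → 0.4
```

Note that the new model fixes one old error (the third sample) yet introduces two fresh ones, so users who relied on the old model's correct answers experience a regression on 40% of the data.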
arXiv Detail & Related papers (2020-11-18T09:00:44Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z) - Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but some cannot be used at prediction time.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z) - Performative Prediction [31.876692592395777]
We develop a framework for performative prediction bringing together concepts from statistics, game theory, and causality.
A conceptual novelty is an equilibrium notion we call performative stability.
Our main results are necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss.
arXiv Detail & Related papers (2020-02-16T20:29:42Z)
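The notion of performative stability can be illustrated with a toy retraining loop (a sketch under assumed dynamics, not the paper's framework: the constants and the linear response of the data to the deployed prediction are made up). Deploying a prediction theta shifts the outcome mean to a + b*theta, so retraining repeatedly maps theta to a + b*theta until it reaches a fixed point.

```python
# Repeated risk minimization on a distribution that reacts to the
# deployed model: theta_{t+1} = a + b * theta_t. For |b| < 1 this
# converges to the performatively stable point theta* = a / (1 - b),
# where retraining on the induced distribution no longer moves theta.

a, b = 1.0, 0.5        # hypothetical response parameters, |b| < 1
theta = 0.0            # initial deployed prediction
for _ in range(50):
    theta = a + b * theta    # retrain on the shifted distribution

stable = a / (1 - b)         # fixed point of theta = a + b * theta
print(round(theta, 6), round(stable, 6))
```

The loop converges geometrically at rate |b|; with |b| >= 1 the same retraining scheme would diverge, matching the flavor of the paper's necessary-and-sufficient convergence conditions.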
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.