AutoLAW: Augmented Legal Reasoning through Legal Precedent Prediction
- URL: http://arxiv.org/abs/2106.16034v1
- Date: Wed, 30 Jun 2021 13:01:33 GMT
- Title: AutoLAW: Augmented Legal Reasoning through Legal Precedent Prediction
- Authors: Robert Zev Mahari
- Abstract summary: This paper demonstrates how NLP can be used to address an unmet need of the legal community and increase access to justice.
The paper introduces Legal Precedent Prediction (LPP), the task of predicting relevant passages from precedential court decisions given the context of a legal argument.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper demonstrates how NLP can be used to address an unmet need of the
legal community and increase access to justice. The paper introduces Legal
Precedent Prediction (LPP), the task of predicting relevant passages from
precedential court decisions given the context of a legal argument. To this
end, the paper showcases a BERT model, trained on 530,000 examples of legal
arguments made by U.S. federal judges, to predict relevant passages from
precedential court decisions given the context of a legal argument. In 96% of
unseen test examples the correct target passage is among the top-10 predicted
passages. The same model is able to predict relevant precedent given a short
summary of a complex and unseen legal brief, predicting the precedent that was
actually cited by the brief's co-author, former U.S. Solicitor General and
current U.S. Supreme Court Justice Elena Kagan.
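As a rough illustration of the evaluation the abstract describes, the sketch below embeds legal arguments and candidate precedential passages, ranks passages by similarity, and computes a top-10 hit rate. The off-the-shelf encoder, the cosine-similarity scoring, and the field names are assumptions for illustration, not the paper's fine-tuned BERT pipeline (which is trained on 530,000 judicial arguments).

```python
# Minimal sketch of a top-10 retrieval evaluation for legal passage prediction.
# The generic encoder is a placeholder for the paper's fine-tuned BERT model.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder, not the paper's model

def recall_at_k(arguments, cited_idx, passages, k=10):
    """Fraction of arguments whose cited passage appears among the top-k ranked passages."""
    arg_emb = encoder.encode(arguments, normalize_embeddings=True)
    pas_emb = encoder.encode(passages, normalize_embeddings=True)
    scores = arg_emb @ pas_emb.T                  # cosine similarities (unit-norm embeddings)
    top_k = np.argsort(-scores, axis=1)[:, :k]    # indices of the k best passages per argument
    hits = [cited in row for cited, row in zip(cited_idx, top_k)]
    return float(np.mean(hits))
```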
Related papers
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Towards Explainability in Legal Outcome Prediction Models [64.00172507827499]
We argue that precedent is a natural way of facilitating explainability for legal NLP models.
By developing a taxonomy of legal precedent, we are able to compare human judges and neural models.
We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.
arXiv Detail & Related papers (2024-03-25T15:15:41Z)
- PILOT: Legal Case Outcome Prediction with Case Law [43.680862577060765]
We identify two unique challenges in making legal case outcome predictions with case law.
First, it is crucial to identify relevant precedent cases that serve as fundamental evidence for judges during decision-making.
Second, it is necessary to consider the evolution of legal principles over time, as early cases may adhere to different legal contexts.
arXiv Detail & Related papers (2024-01-28T21:18:05Z)
- LePaRD: A Large-Scale Dataset of Judges Citing Precedents [11.163288406795335]
LePaRD is a massive collection of U.S. federal judicial citations to precedent in context.
Legal passage prediction seeks to predict relevant passages from precedential court decisions.
A subset of the LePaRD dataset is freely available and the whole dataset will be released upon publication.
arXiv Detail & Related papers (2023-11-15T20:33:27Z)
- Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are previous legal cases with similar facts that serve as the basis for judgments in subsequent cases within national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z)
- Prototype-Based Interpretability for Legal Citation Prediction [16.660004925391842]
We design the task with parallels to the thought-process of lawyers, i.e., with reference to both precedents and legislative provisions.
After initial experimental results, we refine the target citation predictions with the feedback of legal experts.
We introduce a prototype architecture to add interpretability, achieving strong performance while adhering to decision parameters used by lawyers.
arXiv Detail & Related papers (2023-05-25T21:40:58Z)
- Predicting Indian Supreme Court Judgments, Decisions, Or Appeals [0.403831199243454]
We introduce our newly developed ML-enabled legal prediction model and its operational prototype, eLegPredict.
eLegPredict is trained and tested on 3,072 Supreme Court cases and achieves 76% accuracy (F1-score).
eLegPredict also includes a mechanism to aid end users: as soon as a document containing a new case description is dropped into a designated directory, the system reads its content and generates a prediction.
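A rough sketch of that drop-folder workflow, assuming a simple polling loop; the directory name, the polling approach, and the predict_outcome() stand-in are illustrative assumptions, not eLegPredict's actual implementation.

```python
# Hypothetical "drop a case file, get a prediction" loop for illustration only.
import time
from pathlib import Path

WATCH_DIR = Path("incoming_cases")   # hypothetical drop directory
WATCH_DIR.mkdir(exist_ok=True)

def predict_outcome(case_text: str) -> str:
    """Placeholder for the trained judgment classifier described in the paper."""
    return "appeal allowed" if "allowed" in case_text.lower() else "appeal dismissed"

seen = set()
while True:
    for path in sorted(WATCH_DIR.glob("*.txt")):
        if path not in seen:                      # only process newly dropped files
            seen.add(path)
            print(f"{path.name}: {predict_outcome(path.read_text())}")
    time.sleep(2)                                 # poll every few seconds
```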
arXiv Detail & Related papers (2021-09-28T18:28:43Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release a Longformer-based pre-trained language model, named Lawformer, for Chinese legal long-document understanding.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
- What About the Precedent: An Information-Theoretic Analysis of Common Law [64.49276556192073]
In common law, the outcome of a new case is determined mostly by precedent cases, rather than existing statutes.
We are the first to approach this question by comparing two longstanding jurisprudential views.
We find that a precedent's arguments share 0.38 nats of information with the case's outcome, whereas the precedent's facts share only 0.18 nats of information (see the sketch below).
arXiv Detail & Related papers (2021-04-25T11:20:09Z)
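The "nats" in the figures above are units of mutual information measured with the natural logarithm (1 nat ≈ 1.44 bits). A minimal sketch of that computation on a toy joint count table, which is illustrative only and unrelated to the paper's data:

```python
# Mutual information I(X;Y) in nats from a made-up joint count table.
import numpy as np

joint = np.array([[30.0, 10.0],      # rows: a binary precedent feature X
                  [10.0, 50.0]])     # cols: binary case outcome Y
p_xy = joint / joint.sum()
p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)
mask = p_xy > 0
mi_nats = float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))
print(f"I(X;Y) = {mi_nats:.3f} nats")  # natural log gives nats; log base 2 would give bits
```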