Legal Syllogism Prompting: Teaching Large Language Models for Legal
Judgment Prediction
- URL: http://arxiv.org/abs/2307.08321v1
- Date: Mon, 17 Jul 2023 08:38:46 GMT
- Title: Legal Syllogism Prompting: Teaching Large Language Models for Legal
Judgment Prediction
- Authors: Cong Jiang and Xiaolei Yang
- Abstract summary: Legal syllogism prompting (LoT) is a simple prompting method that teaches large language models to perform legal judgment prediction.
LoT teaches only that, in a legal syllogism, the major premise is the law, the minor premise is the fact, and the conclusion is the judgment.
Our results show that LLMs with LoT achieve better performance than the baseline and chain of thought prompting.
- Score: 0.6091702876917281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Legal syllogism is a form of deductive reasoning commonly used by legal
professionals to analyze cases. In this paper, we propose legal syllogism
prompting (LoT), a simple prompting method that teaches large language models
(LLMs) to perform legal judgment prediction. LoT teaches only that, in a legal
syllogism, the major premise is the law, the minor premise is the fact, and the
conclusion is the judgment. The models can then produce syllogistic reasoning
for the case and give the judgment without any learning, fine-tuning, or examples.
On CAIL2018, a Chinese criminal case dataset, we performed zero-shot judgment
prediction experiments with GPT-3 models. Our results show that LLMs with LoT
achieve better performance than both the baseline and chain-of-thought prompting,
the state-of-the-art prompting method on diverse reasoning tasks. Compared with
other methods, LoT enables the model to concentrate on the key information
relevant to the judgment and to correctly understand the legal meaning of the
acts involved. Our method enables LLMs to predict the judgment along with the
applicable law articles and a justification, which significantly enhances the
explainability of the models.
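As a rough illustration of the zero-shot setup described above, the sketch below shows how a legal-syllogism prompt might be assembled and sent to a chat-completion endpoint. The template wording, model name, and helper function are illustrative assumptions, not the authors' exact prompt or code.

```python
# Minimal sketch of a legal-syllogism (LoT) style zero-shot prompt.
# Assumes the OpenAI Python SDK and an API key in the environment; the
# template text and model name are illustrative, not the paper's originals.
from openai import OpenAI

LOT_TEMPLATE = (
    "Analyze the following criminal case using a legal syllogism.\n"
    "Major premise: the applicable law articles.\n"
    "Minor premise: the facts of the case.\n"
    "Conclusion: the judgment (charge and penalty).\n\n"
    "Case facts: {facts}\n\n"
    "Major premise:"
)

def predict_judgment(facts: str, model: str = "gpt-3.5-turbo") -> str:
    """Zero-shot judgment prediction: no examples, no fine-tuning."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": LOT_TEMPLATE.format(facts=facts)}],
        temperature=0,
    )
    return response.choices[0].message.content
```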
Related papers
- SyLeR: A Framework for Explicit Syllogistic Legal Reasoning in Large Language Models [5.501226256903341]
We propose SyLeR, a novel framework that empowers LLMs to engage in explicit syllogistic legal reasoning.
SyLeR integrates a tree-structured hierarchical retrieval mechanism to effectively combine relevant legal statutes and precedent cases.
arXiv Detail & Related papers (2025-04-05T03:34:51Z)
- AnnoCaseLaw: A Richly-Annotated Dataset For Benchmarking Explainable Legal Judgment Prediction [56.797874973414636]
AnnoCaseLaw is a first-of-its-kind dataset of 471 meticulously annotated U.S. Appeals Court negligence cases.
Our dataset lays the groundwork for more human-aligned, explainable Legal Judgment Prediction models.
Results demonstrate that LJP remains a formidable task, with application of legal precedent proving particularly difficult.
arXiv Detail & Related papers (2025-02-28T19:14:48Z)
- Artificial Intelligence and Legal Analysis: Implications for Legal Education and the Profession [0.0]
This article reports the results of a study examining the ability of legal and nonlegal Large Language Models to perform legal analysis.
The results show that LLMs can conduct basic IRAC (Issue, Rule, Application, Conclusion) analysis, but are limited by brief responses lacking detail, an inability to commit to answers, false confidence, and hallucinations.
arXiv Detail & Related papers (2025-02-04T19:50:48Z)
- Legal Evalutions and Challenges of Large Language Models [42.51294752406578]
We use the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.
We compare current state-of-the-art LLMs, including open-source, closed-source, and legal-specific models trained specifically for the legal domain.
arXiv Detail & Related papers (2024-11-15T12:23:12Z)
- Topic Modelling Case Law Using a Large Language Model and a New Taxonomy for UK Law: AI Insights into Summary Judgment [0.0]
This paper develops and applies a novel taxonomy for topic modelling summary judgment cases in the United Kingdom.
Using a curated dataset of summary judgment cases, we apply the large language model Claude 3 Opus to explore functional topics and trends.
We find that Claude 3 Opus correctly classified the topic with an accuracy of 87.10%.
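The classify-and-score workflow summarized above can be sketched roughly as follows. The taxonomy labels and prompt wording are placeholders rather than the paper's actual taxonomy, and `query_llm` stands in for whichever client is used to call Claude 3 Opus.

```python
# Rough sketch: classify each case into a fixed taxonomy with an LLM and
# score the predictions against human labels. Labels and prompt are placeholders.
from typing import Callable, Sequence

TAXONOMY = ["contract", "tort", "property", "public law"]  # placeholder labels

def classify_case(case_text: str, query_llm: Callable[[str], str]) -> str:
    prompt = (
        "Classify this UK summary judgment case into exactly one of these "
        f"topics: {', '.join(TAXONOMY)}. Reply with the topic name only.\n\n"
        f"{case_text}"
    )
    return query_llm(prompt).strip().lower()

def topic_accuracy(cases: Sequence[str], gold: Sequence[str],
                   query_llm: Callable[[str], str]) -> float:
    predictions = [classify_case(c, query_llm) for c in cases]
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)
```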
arXiv Detail & Related papers (2024-05-21T16:30:25Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Towards Explainability in Legal Outcome Prediction Models [64.00172507827499]
We argue that precedent is a natural way of facilitating explainability for legal NLP models.
By developing a taxonomy of legal precedent, we are able to compare human judges and neural models.
We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.
arXiv Detail & Related papers (2024-03-25T15:15:41Z)
- LegalDuet: Learning Fine-grained Representations for Legal Judgment Prediction via a Dual-View Contrastive Learning [22.59356182108378]
Legal Judgment Prediction (LJP) is a fundamental task of legal artificial intelligence, aiming to automatically predict the judgment outcomes of legal cases.
Existing LJP models primarily focus on identifying legal triggers within criminal fact descriptions.
We propose LegalDuet, which continually pretrains language models to learn a more tailored embedding space for representing legal cases.
arXiv Detail & Related papers (2024-01-27T10:28:27Z)
- Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are previous legal cases with similar facts, which serve as the basis for judging subsequent cases in national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z)
- Unlocking Practical Applications in Legal Domain: Evaluation of GPT for Zero-Shot Semantic Annotation of Legal Texts [0.0]
We evaluate the capability of a state-of-the-art generative pre-trained transformer (GPT) model to perform semantic annotation of short text snippets.
We found that the GPT model performs surprisingly well in zero-shot settings on diverse types of documents.
arXiv Detail & Related papers (2023-05-08T01:55:53Z)
- Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction [46.71918729837462]
Given the fact description text of a legal case, legal judgment prediction aims to predict the case's charge, law article and penalty term.
Previous studies fail to distinguish different classification errors with a standard cross-entropy classification loss.
We propose a MoCo-based supervised contrastive learning approach to learn distinguishable representations.
We further enhance the representation of the fact description with extracted crime amounts, which are encoded by a pre-trained numeracy model.
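For context, a generic supervised contrastive objective over case embeddings looks roughly like the sketch below. This is a plain SupCon-style loss, not the paper's exact MoCo-queue formulation, and the numeracy encoding of crime amounts is omitted.

```python
# Generic supervised contrastive loss: pull together cases that share a charge
# label, push apart the rest. Simplified; no momentum encoder or queue.
import torch
import torch.nn.functional as F

def sup_con_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """embeddings: (N, d) fact-description vectors; labels: (N,) charge ids.
    Assumes at least one anchor in the batch has a same-label positive."""
    z = F.normalize(embeddings, dim=1)
    sim = (z @ z.T) / temperature                        # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average negative log-probability of same-charge pairs per anchor
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1)
    per_anchor = per_anchor / pos_mask.sum(dim=1).clamp(min=1)
    valid = pos_mask.any(dim=1)
    return per_anchor[valid].mean()
```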
arXiv Detail & Related papers (2022-11-15T15:53:56Z)
- Do Charge Prediction Models Learn Legal Theory? [59.74220430434435]
We argue that trustworthy charge prediction models should take legal theories into consideration.
We propose three principles that trustworthy models should follow in this task: sensitivity, selectivity, and the presumption of innocence.
Our findings indicate that, while existing charge prediction models meet the selective principle on a benchmark dataset, most of them are still not sensitive enough and do not satisfy the presumption of innocence.
arXiv Detail & Related papers (2022-10-31T07:32:12Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release Lawformer, a Longformer-based pre-trained language model for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
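A minimal sketch of how such a checkpoint might be loaded with the Hugging Face transformers library follows; the hub identifier `thunlp/Lawformer` and the usage shown are assumptions based on the typical release pattern, so the official repository should be checked for the exact name.

```python
# Hypothetical loading sketch for a Longformer-style Chinese legal model.
# The hub id "thunlp/Lawformer" is an assumption; verify against the release.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("thunlp/Lawformer")
model = AutoModel.from_pretrained("thunlp/Lawformer")

document = "..."  # placeholder for a long Chinese legal document
inputs = tokenizer(document, return_tensors="pt",
                   truncation=True, max_length=4096)
outputs = model(**inputs)
doc_embedding = outputs.last_hidden_state[:, 0]  # first-token representation
```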
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
- A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering [37.66486350122862]
This paper investigates the performance of natural language understanding approaches on statutory reasoning.
We introduce a dataset, together with a legal-domain text corpus.
We contrast this with a hand-constructed Prolog-based system, designed to fully solve the task.
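To make the contrast between learned models and a hand-built symbolic system concrete, the toy sketch below encodes a fictional statutory provision as an explicit predicate that can be evaluated against a case; the rule, field names, and threshold are invented for illustration and do not come from the dataset.

```python
# Toy rule-based statutory reasoning: a fictional provision as a predicate,
# so "entailment" is decided by direct evaluation, not by a learned model.
from dataclasses import dataclass

@dataclass
class Taxpayer:
    filing_status: str   # e.g. "single" or "joint" (invented fields)
    gross_income: int    # annual income in dollars

def must_file_return(t: Taxpayer) -> bool:
    """Fictional rule: filing is required above a status-dependent threshold."""
    threshold = 12_000 if t.filing_status == "single" else 24_000
    return t.gross_income > threshold

case = Taxpayer(filing_status="single", gross_income=15_000)
print(must_file_return(case))  # True: the rule entails a filing obligation
```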
arXiv Detail & Related papers (2020-05-11T16:54:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.