Legal Syllogism Prompting: Teaching Large Language Models for Legal Judgment Prediction
- URL: http://arxiv.org/abs/2307.08321v1
- Date: Mon, 17 Jul 2023 08:38:46 GMT
- Title: Legal Syllogism Prompting: Teaching Large Language Models for Legal Judgment Prediction
- Authors: Cong Jiang and Xiaolei Yang
- Abstract summary: Legal syllogism prompting (LoT) is a simple prompting method to teach large language models to perform legal judgment prediction.
LoT teaches only that in a legal syllogism the major premise is the law, the minor premise is the fact, and the conclusion is the judgment.
Our results show that LLMs with LoT achieve better performance than both the baseline and chain-of-thought prompting.
- Score: 0.6091702876917281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Legal syllogism is a form of deductive reasoning commonly used by legal
professionals to analyze cases. In this paper, we propose legal syllogism
prompting (LoT), a simple prompting method to teach large language models
(LLMs) to perform legal judgment prediction. LoT teaches only that in a legal
syllogism the major premise is the law, the minor premise is the fact, and the
conclusion is the judgment. The models can then produce syllogistic reasoning
about the case and deliver a judgment without any training, fine-tuning, or
in-context examples.
On CAIL2018, a Chinese criminal case dataset, we performed zero-shot judgment
prediction experiments with GPT-3 models. Our results show that LLMs with LoT
outperform both the baseline and chain-of-thought prompting, the
state-of-the-art prompting method on diverse reasoning tasks. LoT enables the
model to concentrate on the key information relevant to the judgment and to
correctly understand the legal meaning of acts, as compared to other methods.
Our method enables LLMs to predict the judgment together with the applicable
law articles and a justification, which significantly enhances the
explainability of the models.
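The paper itself ships no code, but the prompting scheme it describes is easy to sketch. The snippet below is a minimal, illustrative reconstruction of LoT-style zero-shot prompting, not the authors' exact template: it assumes the current OpenAI Python client and a placeholder model name (the original experiments used GPT-3 models), and the instruction wording is a paraphrase of the syllogism structure described above.

```python
# Illustrative sketch of legal syllogism (LoT) prompting, zero-shot.
# Assumes the openai Python package (>=1.0) and OPENAI_API_KEY in the
# environment. The model name and prompt wording are placeholders, not
# the authors' exact setup (the paper used GPT-3 models).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

LOT_INSTRUCTION = (
    "Analyze this case as a legal syllogism: the major premise is the "
    "applicable law, the minor premise is the facts of the case, and the "
    "conclusion is the judgment. State each premise, then give the judgment."
)

def predict_judgment(fact_description: str) -> str:
    """Zero-shot prediction: no fine-tuning and no in-context examples."""
    prompt = f"{LOT_INSTRUCTION}\n\nFacts: {fact_description}"
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for evaluation runs
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    facts = ("The defendant broke into a locked shed at night and took a "
             "bicycle worth 2,000 yuan without the owner's consent.")
    print(predict_judgment(facts))
```

Because the model is asked to state the major premise (the law) explicitly before concluding, the predicted law articles and a justification come out alongside the verdict, which is the explainability benefit the abstract describes.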
Related papers
- Artificial Intelligence and Legal Analysis: Implications for Legal Education and the Profession [0.0]
This article reports the results of a study examining the ability of legal and non-legal large language models to perform legal analysis.
The results show that LLMs can conduct basic IRAC analysis, but are limited by brief responses lacking detail, an inability to commit to answers, false confidence, and hallucinations.
arXiv Detail & Related papers (2025-02-04T19:50:48Z)
- Beyond Guilt: Legal Judgment Prediction with Trichotomous Reasoning [12.589047235741194]
We introduce LJPIV, the first benchmark dataset for Legal Judgment Prediction with Innocent Verdicts.
Adhering to trichotomous dogmatics, we extend three widely used legal datasets through LLM-based augmentation and manual verification.
Our experiments with state-of-the-art legal LLMs, and with novel strategies that integrate trichotomous reasoning into zero-shot prompting and fine-tuning, reveal that current legal LLMs have significant room for improvement, with even the best models achieving an F1 score below 0.3 on LJPIV.
arXiv Detail & Related papers (2024-12-19T07:14:13Z)
- Legal Evalutions and Challenges of Large Language Models [42.51294752406578]
We use the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.
We compare current state-of-the-art LLMs, including open-source, closed-source, and legal-specific models trained specifically for the legal domain.
arXiv Detail & Related papers (2024-11-15T12:23:12Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Towards Explainability in Legal Outcome Prediction Models [64.00172507827499]
We argue that precedent is a natural way of facilitating explainability for legal NLP models.
By developing a taxonomy of legal precedent, we are able to compare human judges and neural models.
We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.
arXiv Detail & Related papers (2024-03-25T15:15:41Z)
- Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are previous legal cases with similar facts, which serve as the basis for judging subsequent cases in many national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z)
- Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction [46.71918729837462]
Given the fact description text of a legal case, legal judgment prediction aims to predict the case's charge, law article and penalty term.
Previous studies, using a standard cross-entropy classification loss, fail to distinguish between different kinds of classification errors.
We propose MoCo-based supervised contrastive learning to learn distinguishable representations (a generic sketch of such a loss follows this entry).
We further enhance the representation of the fact description with extracted crime amounts, which are encoded by a pre-trained numeracy model.
arXiv Detail & Related papers (2022-11-15T15:53:56Z)
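For readers unfamiliar with the technique mentioned above, here is a minimal, generic sketch of a supervised contrastive (SupCon-style) loss in PyTorch. It is illustrative only: the paper pairs this idea with a MoCo-style momentum queue and a numeracy encoder, neither of which is reproduced here, and all names are placeholders.

```python
# Generic supervised contrastive (SupCon-style) loss sketch in PyTorch.
# Illustrative only; not the paper's exact MoCo-based implementation.
import torch
import torch.nn.functional as F

def sup_con_loss(features: torch.Tensor, labels: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """features: (N, d) embeddings of fact descriptions; labels: (N,) charge ids.
    Same-charge pairs are pulled together; different charges are pushed apart."""
    features = F.normalize(features, dim=1)        # work in cosine-similarity space
    n = features.size(0)
    sim = features @ features.T / temperature      # (N, N) scaled similarities
    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    sim = sim.masked_fill(eye, float("-inf"))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                         # anchors with at least one positive
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(sum_log_prob_pos[valid] / pos_counts[valid]).mean()

# Toy usage: four cases, two charge labels.
feats = torch.randn(4, 8)
labels = torch.tensor([0, 0, 1, 1])
print(sup_con_loss(feats, labels))
```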
- Do Charge Prediction Models Learn Legal Theory? [59.74220430434435]
We argue that trustworthy charge prediction models should take legal theories into consideration.
We propose three principles that trustworthy models should follow in this task: sensitivity, selectivity, and the presumption of innocence.
Our findings indicate that, while existing charge prediction models meet the selective principle on a benchmark dataset, most of them are still not sensitive enough and do not satisfy the presumption of innocence.
arXiv Detail & Related papers (2022-10-31T07:32:12Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release Lawformer, a Longformer-based pre-trained language model for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
- A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering [37.66486350122862]
This paper investigates the performance of natural language understanding approaches on statutory reasoning.
We introduce a dataset, together with a legal-domain text corpus.
We contrast this with a hand-constructed Prolog-based system, designed to fully solve the task (a toy sketch of this rule-based style follows below).
arXiv Detail & Related papers (2020-05-11T16:54:42Z)
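To make the contrast concrete: a hand-constructed system encodes statutes as executable rules. The toy sketch below shows the flavor of that approach in Python rather than Prolog; the rule, names, and thresholds are invented for illustration and are not from the paper or any real statute.

```python
# Toy illustration (not from the paper): a statutory rule hand-coded as an
# executable predicate, in the spirit of the paper's Prolog system but in
# Python. The rule, names, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Person:
    income: int                # annual income in dollars
    months_with_taxpayer: int  # months residing in the taxpayer's household

def qualifies_as_dependent(p: Person) -> bool:
    """Hypothetical statute: a dependent earns less than $4,300 and lived
    with the taxpayer for more than half of the tax year."""
    return p.income < 4300 and p.months_with_taxpayer > 6

# Entailment-style query: does the statute apply to this case?
print(qualifies_as_dependent(Person(income=3000, months_with_taxpayer=9)))  # True
```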