Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model
Collaboration
- URL: http://arxiv.org/abs/2310.09241v1
- Date: Fri, 13 Oct 2023 16:47:20 GMT
- Title: Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model
Collaboration
- Authors: Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating
Zhang, Changlong Sun, Fei Wu, Kun Kuang
- Abstract summary: Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are previous legal cases with similar facts, which serve as the basis for judging subsequent cases in national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
- Score: 52.57055162778548
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Legal Judgment Prediction (LJP) has become an increasingly crucial
task in Legal AI, i.e., predicting the judgment of a case from its fact
description. Precedents are previous legal cases with similar facts, which
serve as the basis for judging subsequent cases in national legal systems.
It is therefore worthwhile to explore the use of precedents in LJP. Recent
advances in deep learning have enabled a variety of techniques to be applied
to the LJP task. These fall into two categories: large language models (LLMs)
and domain-specific models. LLMs are capable of interpreting and generating
complex natural language, while domain models are efficient at learning
task-specific information. In this paper, we propose the Precedent-Enhanced
LJP framework (PLJP), a system that leverages the strengths of both LLMs and
domain models in the context of precedents. Specifically, the domain models
provide candidate labels and efficiently retrieve suitable precedents, and the
LLM makes the final prediction through in-context comprehension of those
precedents. Experiments on a real-world dataset demonstrate the effectiveness
of PLJP. Moreover, our work points to a promising direction for LLM and
domain-model collaboration that can generalize to other vertical domains.
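
To make the division of labor concrete, here is a minimal Python sketch of one
PLJP-style prediction step as described in the abstract; the `top_k_labels` and
`search` interfaces and the `llm` callable are hypothetical placeholders for
illustration, not the authors' released code.

```python
from dataclasses import dataclass

@dataclass
class Precedent:
    fact: str
    judgment: str

def predict_judgment(fact, domain_model, precedent_index, llm):
    """One PLJP-style prediction step (illustrative sketch only).

    domain_model    -- any classifier exposing top_k_labels(fact) -> list[str]
    precedent_index -- any retriever exposing search(fact, k) -> list[Precedent]
    llm             -- any text-completion callable: prompt -> str
    """
    # 1. The domain model narrows the label space to a few candidates.
    candidates = domain_model.top_k_labels(fact)

    # 2. A domain-side retriever finds past cases with similar facts.
    precedents = precedent_index.search(fact, k=3)

    # 3. The LLM reads the precedents in context and commits to a label.
    context = "\n\n".join(
        f"Precedent fact: {p.fact}\nPrecedent judgment: {p.judgment}"
        for p in precedents
    )
    prompt = (
        f"{context}\n\n"
        f"Current case fact: {fact}\n"
        f"Candidate judgments: {', '.join(candidates)}\n"
        "Considering the precedents, answer with the single most "
        "appropriate judgment."
    )
    return llm(prompt).strip()
```

The point of the split is that the cheap, task-specific domain model prunes the
label and precedent space before the single, expensive LLM call.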
Related papers
- Legal Evalutions and Challenges of Large Language Models [42.51294752406578]
We use the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.
We compare current state-of-the-art LLMs, including open-source, closed-source, and legal-specific models trained specifically for the legal domain.
arXiv Detail & Related papers (2024-11-15T12:23:12Z)
- TransformLLM: Adapting Large Language Models via LLM-Transformed Reading Comprehension Text [5.523385345486362]
We have developed language models specifically designed for legal applications.
Our innovative approach significantly improves capabilities in legal tasks by using Large Language Models (LLMs) to convert raw training data into reading comprehension text.
arXiv Detail & Related papers (2024-10-28T19:32:18Z)
- LawLLM: Law Large Language Model for the US Legal System [43.13850456765944]
We introduce the Law Large Language Model (LawLLM), a multi-task model specifically designed for the US legal domain.
LawLLM excels at Similar Case Retrieval (SCR), Precedent Case Recommendation (PCR), and Legal Judgment Prediction (LJP).
We propose customized data preprocessing techniques for each task that transform raw legal data into a trainable format.
arXiv Detail & Related papers (2024-07-27T21:51:30Z)
- Enabling Discriminative Reasoning in LLMs for Legal Judgment Prediction [23.046342240176575]
We introduce the Ask-Discriminate-Predict (ADAPT) reasoning framework inspired by human reasoning.
ADAPT involves decomposing case facts, discriminating among potential charges, and predicting the final judgment.
Experiments conducted on two widely used datasets demonstrate the superior performance of our framework in legal judgment prediction (a minimal sketch of this three-step pattern appears after this list).
arXiv Detail & Related papers (2024-07-02T05:43:15Z)
- InternLM-Law: An Open Source Chinese Legal Large Language Model [72.2589401309848]
InternLM-Law is a specialized LLM tailored for addressing diverse legal queries related to Chinese laws.
We meticulously construct a dataset in the Chinese legal domain, encompassing over 1 million queries.
InternLM-Law achieves the highest average performance on LawBench, outperforming state-of-the-art models, including GPT-4, on 13 out of 20 subtasks.
arXiv Detail & Related papers (2024-06-21T06:19:03Z)
- BLADE: Enhancing Black-box Large Language Models with Small Domain-Specific Models [56.89958793648104]
Large Language Models (LLMs) are versatile and capable of addressing a diverse range of tasks.
Previous approaches either conduct continuous pre-training with domain-specific data or employ retrieval augmentation to support general LLMs.
We present a novel framework named BLADE, which enhances Black-box LArge language models with small Domain-spEcific models.
arXiv Detail & Related papers (2024-03-27T08:57:21Z)
- Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement [3.537369004801589]
We study the classification of legal reasoning according to jurisprudential philosophy.
We use a novel dataset of historical United States Supreme Court opinions annotated by a team of domain experts.
We find that generative models perform poorly when given the same instructions that were presented to human annotators.
arXiv Detail & Related papers (2023-10-27T19:27:59Z)
- A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges [73.34944216896837]
Legal judgment prediction (LJP) applies Natural Language Processing (NLP) techniques to predict judgment results based on fact descriptions automatically.
We analyze 31 LJP datasets in 6 languages, present their construction process, and define a classification method for LJP.
We show the state-of-the-art results for 8 representative datasets from different court cases and discuss the open challenges.
arXiv Detail & Related papers (2022-04-11T04:06:28Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release Lawformer, a Longformer-based pre-trained language model for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
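
The ADAPT entry above describes a decompose-discriminate-predict pipeline.
Below is a minimal Python sketch of that three-step prompting pattern; the
function name, prompt wording, and `llm` callable are assumptions for
illustration, not the paper's released implementation.

```python
def adapt_style_predict(fact, candidate_charges, llm):
    """Three-stage prompt chain loosely following the
    decompose -> discriminate -> predict structure summarized above.
    `llm` is any prompt -> completion callable; prompts are invented.
    """
    # Stage 1: decompose the raw fact description into discrete events.
    analysis = llm(
        "Break the following case facts into discrete events and "
        f"circumstances:\n{fact}"
    )

    # Stage 2: discriminate among the candidate charges.
    shortlist = llm(
        f"Case analysis:\n{analysis}\n\n"
        f"Candidate charges: {', '.join(candidate_charges)}\n"
        "List only the charges that plausibly apply."
    )

    # Stage 3: predict the final judgment from the narrowed set.
    judgment = llm(
        f"Case analysis:\n{analysis}\n"
        f"Plausible charges: {shortlist}\n"
        "State the single most appropriate final judgment."
    )
    return {"analysis": analysis, "shortlist": shortlist, "judgment": judgment}
```

Any prompt-to-completion function (for example, a thin wrapper around a hosted
or local model) can stand in for `llm`.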
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.