DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal
Services
- URL: http://arxiv.org/abs/2309.11325v2
- Date: Sat, 23 Sep 2023 18:36:21 GMT
- Title: DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal
Services
- Authors: Shengbin Yue, Wei Chen, Siyuan Wang, Bingxuan Li, Chenchen Shen,
Shujun Liu, Yuxuan Zhou, Yao Xiao, Song Yun, Xuanjing Huang, Zhongyu Wei
- Abstract summary: We propose DISC-LawLLM, an intelligent legal system utilizing large language models (LLMs) to provide a wide range of legal services.
We adopt legal syllogism prompting strategies to construct supervised fine-tuning datasets in the Chinese Judicial domain.
A comprehensive legal benchmark, DISC-Law-Eval, is presented to evaluate intelligent legal systems from both objective and subjective dimensions.
- Score: 41.92132088988707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose DISC-LawLLM, an intelligent legal system utilizing large language
models (LLMs) to provide a wide range of legal services. We adopt legal
syllogism prompting strategies to construct supervised fine-tuning datasets in
the Chinese Judicial domain and fine-tune LLMs with legal reasoning capability.
We augment LLMs with a retrieval module to enhance models' ability to access
and utilize external legal knowledge. A comprehensive legal benchmark,
DISC-Law-Eval, is presented to evaluate intelligent legal systems from both
objective and subjective dimensions. Quantitative and qualitative results on
DISC-Law-Eval demonstrate the effectiveness of our system in serving various
users across diverse legal scenarios. The detailed resources are available at
https://github.com/FudanDISC/DISC-LawLLM.
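As a rough illustration of the pipeline the abstract describes (retrieval of external legal knowledge followed by syllogism-style prompting over a fine-tuned legal LLM), the sketch below shows one plausible arrangement. It is a minimal sketch under stated assumptions: the names Statute, build_syllogism_prompt, and answer_legal_query are hypothetical and are not taken from the DISC-LawLLM repository; the retriever and generator are passed in as generic callables.

```python
# Minimal sketch of the retrieval-then-syllogism flow described above.
# Assumptions (not from the DISC-LawLLM repo): `retrieve` is any function that
# returns relevant statutes for a query, and `generate` wraps the fine-tuned LLM.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Statute:
    article_id: str  # e.g. "Placeholder Law, Article 1" (illustrative only)
    text: str


def build_syllogism_prompt(question: str, statutes: List[Statute]) -> str:
    """Legal-syllogism style prompt: statutes as major premise, facts as minor premise."""
    major = "\n".join(f"- {s.article_id}: {s.text}" for s in statutes)
    return (
        "Major premise (relevant statutes):\n"
        f"{major}\n\n"
        "Minor premise (case facts / user question):\n"
        f"{question}\n\n"
        "Conclusion: apply the statutes to the facts step by step and state the legal conclusion."
    )


def answer_legal_query(
    question: str,
    retrieve: Callable[[str, int], List[Statute]],  # retrieval module over a legal knowledge base
    generate: Callable[[str], str],                 # fine-tuned legal LLM
    top_k: int = 3,
) -> str:
    statutes = retrieve(question, top_k)                 # 1) fetch external legal knowledge
    prompt = build_syllogism_prompt(question, statutes)  # 2) syllogism prompting
    return generate(prompt)                              # 3) grounded generation


if __name__ == "__main__":
    # Stub components so the sketch runs end to end without any real model.
    dummy_retrieve = lambda q, k: [Statute("Placeholder Law, Article 1", "Example provision text.")][:k]
    dummy_generate = lambda p: f"[model answer grounded in a prompt of {len(p)} chars]"
    print(answer_legal_query("Is an unsigned contract enforceable?", dummy_retrieve, dummy_generate))
```

In the system the paper describes, the retriever would be backed by its legal knowledge retrieval module and the generator by the fine-tuned DISC-LawLLM; the stubs here only demonstrate the control flow.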
Related papers
- Legal Evalutions and Challenges of Large Language Models [42.51294752406578]
We use the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.
We compare current state-of-the-art LLMs, including open-source, closed-source, and legal-specific models trained specifically for the legal domain.
arXiv Detail & Related papers (2024-11-15T12:23:12Z)
- DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model [16.63238943983347]
DeliLaw is a Chinese legal counselling system based on a large language model.
Users can ask professional legal questions, search for legal articles and relevant judgment cases, and more on the DeliLaw system in a dialogue mode.
arXiv Detail & Related papers (2024-08-01T07:54:52Z)
- LeKUBE: A Legal Knowledge Update BEnchmark [30.62956609611883]
How to update the legal knowledge of Large Language Models (LLMs) has become an important research problem in practice.
Existing benchmarks for evaluating knowledge update methods are mostly designed for the open domain.
We introduce the Legal Knowledge Update BEnchmark, i.e. LeKUBE, which evaluates knowledge update methods for legal LLMs across five dimensions.
arXiv Detail & Related papers (2024-07-19T10:40:10Z)
- InternLM-Law: An Open Source Chinese Legal Large Language Model [72.2589401309848]
InternLM-Law is a specialized LLM tailored for addressing diverse legal queries related to Chinese laws.
We meticulously construct a dataset in the Chinese legal domain, encompassing over 1 million queries.
InternLM-Law achieves the highest average performance on LawBench, outperforming state-of-the-art models, including GPT-4, on 13 out of 20 subtasks.
arXiv Detail & Related papers (2024-06-21T06:19:03Z)
- A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction [60.70089334782383]
Large language models (LLMs) have demonstrated great potential for domain-specific applications.
Recent disputes over GPT-4's law evaluation raise questions about LLMs' performance in real-world legal tasks.
We design practical baseline solutions based on LLMs and test them on the task of legal judgment prediction.
arXiv Detail & Related papers (2023-10-18T07:38:04Z)
- Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are previous legal cases with similar facts, which serve as the basis for judging subsequent cases in national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z)
- Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence [5.07013500385659]
This paper explores Large Language Models' (LLMs) capabilities in applying tax law.
Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release.
Findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels.
arXiv Detail & Related papers (2023-06-12T12:40:48Z)
- SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval [75.05173891207214]
Legal case retrieval plays a core role in the intelligent legal system.
Most existing language models have difficulty understanding the long-distance dependencies between different structures.
We propose a new Structure-Aware pre-traIned language model for LEgal case Retrieval.
arXiv Detail & Related papers (2023-04-22T10:47:01Z)
- A Short Survey of Viewing Large Language Models in Legal Aspect [0.0]
Large language models (LLMs) have transformed many fields, including natural language processing, computer vision, and reinforcement learning.
The integration of LLMs into the legal field has also raised several legal problems, including privacy concerns, bias, and explainability.
arXiv Detail & Related papers (2023-03-16T08:01:22Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release a Longformer-based pre-trained language model, named Lawformer, for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.