LawLuo: A Multi-Agent Collaborative Framework for Multi-Round Chinese Legal Consultation
- URL: http://arxiv.org/abs/2407.16252v3
- Date: Mon, 16 Dec 2024 05:53:28 GMT
- Title: LawLuo: A Multi-Agent Collaborative Framework for Multi-Round Chinese Legal Consultation
- Authors: Jingyun Sun, Chengxiao Dai, Zhongze Luo, Yangbo Chang, Yang Li
- Abstract summary: LawLuo is a multi-agent framework for multi-turn Chinese legal consultations.
LawLuo includes four agents: the receptionist agent, which assesses user intent and selects a lawyer agent; the lawyer agent, which interacts with the user; the secretary agent, which organizes conversation records and generates consultation reports; and the boss agent, which evaluates the performance of the lawyer and secretary agents.
These agents' interactions mimic the operations of real law firms.
- Abstract: Legal Large Language Models (LLMs) have shown promise in providing legal consultations to non-experts. However, most existing Chinese legal consultation models are based on single-agent systems, which differ from real-world legal consultations, where multiple professionals collaborate to offer more tailored responses. To better simulate real consultations, we propose LawLuo, a multi-agent framework for multi-turn Chinese legal consultations. LawLuo includes four agents: the receptionist agent, which assesses user intent and selects a lawyer agent; the lawyer agent, which interacts with the user; the secretary agent, which organizes conversation records and generates consultation reports; and the boss agent, which evaluates the performance of the lawyer and secretary agents to ensure optimal results. These agents' interactions mimic the operations of real law firms. To train them to follow different legal instructions, we developed distinct fine-tuning datasets. We also introduce a case graph-based RAG to help the lawyer agent address vague user inputs. Experimental results show that LawLuo outperforms baselines in generating more personalized and professional responses, handling ambiguous queries, and following legal instructions in multi-turn conversations. Our full code and constructed datasets will be open-sourced upon paper acceptance.
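The abstract describes the agent architecture but the code has not yet been released; the following is a minimal sketch, assuming hypothetical class names, a placeholder chat helper, and a stubbed retrieve_cases function, of how a receptionist/lawyer/secretary/boss pipeline with a case-retrieval step could be wired together. It is illustrative only, not the authors' implementation.

```python
# Minimal sketch of a LawLuo-style agent workflow as described in the abstract.
# All names (Receptionist, Lawyer, Secretary, Boss, chat, retrieve_cases) are
# hypothetical stand-ins, not the authors' released code.

from dataclasses import dataclass, field
from typing import List


def chat(role_prompt: str, message: str) -> str:
    """Placeholder for a fine-tuned LLM call; swap in a real client here."""
    return f"[{role_prompt}] response to: {message}"


def retrieve_cases(query: str) -> List[str]:
    """Stand-in for the case graph-based RAG step; returns similar-case snippets."""
    return [f"case snippet related to '{query}'"]


@dataclass
class Consultation:
    turns: List[str] = field(default_factory=list)


class Receptionist:
    def route(self, query: str) -> str:
        # Assess user intent and pick a lawyer specialty (toy rule for illustration).
        return "contract_law" if "contract" in query else "general_law"


class Lawyer:
    def __init__(self, specialty: str):
        self.specialty = specialty

    def answer(self, query: str, retrieved_cases: List[str]) -> str:
        context = "\n".join(retrieved_cases)
        return chat(f"lawyer:{self.specialty}", f"{query}\nRelevant cases:\n{context}")


class Secretary:
    def report(self, consultation: Consultation) -> str:
        return chat("secretary", "Summarize:\n" + "\n".join(consultation.turns))


class Boss:
    def review(self, answer: str, report: str) -> bool:
        # Quality gate over the lawyer and secretary outputs; a real system would score them.
        return bool(answer) and bool(report)


def consult(query: str) -> str:
    consultation = Consultation()
    specialty = Receptionist().route(query)
    answer = Lawyer(specialty).answer(query, retrieve_cases(query))
    consultation.turns.extend([f"User: {query}", f"Lawyer: {answer}"])
    report = Secretary().report(consultation)
    assert Boss().review(answer, report), "Boss agent rejected the outputs"
    return answer


if __name__ == "__main__":
    print(consult("My contract was terminated without notice; what can I do?"))
```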
Related papers
- LegalAgentBench: Evaluating LLM Agents in Legal Domain [53.70993264644004]
LegalAgentBench is a benchmark specifically designed to evaluate LLM Agents in the Chinese legal domain.
LegalAgentBench includes 17 corpora from real-world legal scenarios and provides 37 tools for interacting with external knowledge.
arXiv Detail & Related papers (2024-12-23T04:02:46Z) - AgentCourt: Simulating Court with Adversarial Evolvable Lawyer Agents [26.268925743738855]
We present a simulation system called AgentCourt that simulates the entire courtroom process.
Our core goal is to enable lawyer agents to learn how to argue a case, as well as to improve their overall legal skills.
Experiments show that after two lawyer-agents have engaged in a thousand adversarial legal cases in AgentCourt, the evolved lawyer agents exhibit consistent improvement in their ability to handle legal tasks.
arXiv Detail & Related papers (2024-08-15T11:33:20Z) - DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model [16.63238943983347]
DeliLaw is a Chinese legal counselling system based on a large language model.
On the DeliLaw system, users can ask professional legal questions and search for legal articles and relevant judgment cases in a dialogue mode.
arXiv Detail & Related papers (2024-08-01T07:54:52Z) - InternLM-Law: An Open Source Chinese Legal Large Language Model [72.2589401309848]
InternLM-Law is a specialized LLM tailored for addressing diverse legal queries related to Chinese laws.
We meticulously construct a dataset in the Chinese legal domain, encompassing over 1 million queries.
InternLM-Law achieves the highest average performance on LawBench, outperforming state-of-the-art models, including GPT-4, on 13 out of 20 subtasks.
arXiv Detail & Related papers (2024-06-21T06:19:03Z) - (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice [8.48013392781081]
Large language models (LLMs) are increasingly capable of providing users with advice in a wide range of professional domains, including legal advice.
We conducted workshops with 20 legal experts using methods inspired by case-based reasoning.
Our findings reveal novel legal considerations, such as unauthorized practice of law, confidentiality, and liability for inaccurate advice.
arXiv Detail & Related papers (2024-02-02T19:35:34Z) - A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction [60.70089334782383]
Large language models (LLMs) have demonstrated great potential for domain-specific applications.
Recent disputes over GPT-4's law evaluation raise questions concerning LLMs' performance in real-world legal tasks.
We design practical baseline solutions based on LLMs and test them on the task of legal judgment prediction.
arXiv Detail & Related papers (2023-10-18T07:38:04Z) - LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models [15.98468948605927]
LegalBench is a benchmark consisting of 162 tasks covering six different types of legal reasoning.
This paper describes LegalBench, presents an empirical evaluation of 20 open-source and commercial LLMs, and illustrates the types of research explorations LegalBench enables.
arXiv Detail & Related papers (2023-08-20T22:08:03Z) - Chatlaw: A Multi-Agent Collaborative Legal Assistant with Knowledge Graph Enhanced Mixture-of-Experts Large Language Model [30.30848216845138]
Chatlaw is an innovative legal assistant utilizing a Mixture-of-Experts (MoE) model and a multi-agent system.
By integrating knowledge graphs with artificial screening, we construct a high-quality legal dataset to train the MoE model.
Our MoE model outperforms GPT-4 on LawBench and the Unified Qualification Exam for Legal Professionals by 7.73% in accuracy and 11 points, respectively.
arXiv Detail & Related papers (2023-06-28T10:48:34Z) - SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval [75.05173891207214]
Legal case retrieval plays a core role in the intelligent legal system.
Most existing language models have difficulty understanding the long-distance dependencies between different structures.
We propose a new Structure-Aware pre-traIned language model for LEgal case Retrieval.
arXiv Detail & Related papers (2023-04-22T10:47:01Z) - Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release Lawformer, a Longformer-based pre-trained language model for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z) - How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)