ELLA: Empowering LLMs for Interpretable, Accurate and Informative Legal Advice
        - URL: http://arxiv.org/abs/2408.07137v1
 - Date: Tue, 13 Aug 2024 18:12:00 GMT
 - Title: ELLA: Empowering LLMs for Interpretable, Accurate and Informative Legal Advice
 - Authors: Yutong Hu, Kangcheng Luo, Yansong Feng
 - Abstract summary: ELLA is a tool for Empowering LLMs for interpretable, accurate, and informative Legal Advice.
 - Score: 26.743016561520506
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract: Despite the remarkable performance in legal consultation exhibited by legal Large Language Models (LLMs) combined with legal article retrieval components, there are still cases in which the advice given is incorrect or baseless. To alleviate these problems, we propose ELLA, a tool for Empowering LLMs for interpretable, accurate, and informative Legal Advice. ELLA visually presents the correlation between legal articles and the LLM's response by calculating their similarities, providing users with an intuitive legal basis for the responses. In addition, based on the user's query, ELLA retrieves relevant legal articles and displays them to the user. Users can interactively select legal articles for the LLM to generate more accurate responses. ELLA also retrieves relevant legal cases for user reference. Our user study shows that presenting the legal basis for a response helps users better understand it. The accuracy of the LLM's responses also improves when users intervene in selecting legal articles for the LLM. Providing relevant legal cases further helps individuals obtain comprehensive information.
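The abstract describes ranking legal articles by their similarity to the LLM's response so users can see the legal basis for each answer. A minimal sketch of that idea, using a simple bag-of-words cosine similarity in place of whatever learned embeddings the actual system uses (the function names here are illustrative, not ELLA's API):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two texts (a stand-in
    for embedding-based similarity in a real system)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_articles(response: str, articles: dict[str, str]) -> list[tuple[str, float]]:
    """Rank candidate legal articles by similarity to the LLM's response,
    most similar first, so each can be shown as a possible legal basis."""
    scores = [(aid, cosine_similarity(response, text)) for aid, text in articles.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```

A production tool would replace the lexical similarity with sentence embeddings, but the ranking step would look the same.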
 
       
      
        Related papers
        - CitaLaw: Enhancing LLM with Citations in Legal Domain [5.249003454314636]
We propose CitaLaw, the first benchmark designed to evaluate LLMs' ability to produce legally sound responses with appropriate citations.
CitaLaw features a diverse set of legal questions for both laypersons and practitioners, paired with a comprehensive corpus of law articles and precedent cases as a reference pool.
arXiv  Detail & Related papers  (2024-12-19T06:14:20Z)
- LawLuo: A Chinese Law Firm Co-run by LLM Agents [1.9857357818932064]
Large Language Models (LLMs) deliver legal consultation services to users without a legal background.
Existing Chinese legal LLMs limit interaction to a single model-user dialogue.
We propose a novel legal dialogue framework that leverages the collaborative capabilities of multiple LLM agents, termed LawLuo.
arXiv  Detail & Related papers  (2024-07-23T07:40:41Z)
- I Need Help! Evaluating LLM's Ability to Ask for Users' Support: A Case Study on Text-to-SQL Generation [60.00337758147594]
This study explores the proactive ability of LLMs to seek user support.
We propose metrics to evaluate the trade-off between performance improvements and user burden.
Our experiments show that without external feedback, many LLMs struggle to recognize their need for user support.
arXiv  Detail & Related papers  (2024-07-20T06:12:29Z)
- InternLM-Law: An Open Source Chinese Legal Large Language Model [72.2589401309848]
InternLM-Law is a specialized LLM tailored for addressing diverse legal queries related to Chinese laws.
We meticulously construct a dataset in the Chinese legal domain, encompassing over 1 million queries.
InternLM-Law achieves the highest average performance on LawBench, outperforming state-of-the-art models, including GPT-4, on 13 out of 20 subtasks.
arXiv  Detail & Related papers  (2024-06-21T06:19:03Z)
- Knowledge-Infused Legal Wisdom: Navigating LLM Consultation through the Lens of Diagnostics and Positive-Unlabeled Reinforcement Learning [19.55121050697779]
We propose the Diagnostic Legal Large Language Model (D3LM), which utilizes adaptive lawyer-like diagnostic questions to collect additional case information.
D3LM incorporates an innovative graph-based Positive-Unlabeled Reinforcement Learning (PURL) algorithm, enabling the generation of critical questions.
Our research also introduces a new English-language CVG dataset based on the US case law database.
arXiv  Detail & Related papers  (2024-06-05T19:47:35Z)
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice [8.48013392781081]
Large language models (LLMs) are increasingly capable of providing users with advice in a wide range of professional domains, including legal advice.
We conducted workshops with 20 legal experts using methods inspired by case-based reasoning.
Our findings reveal novel legal considerations, such as unauthorized practice of law, confidentiality, and liability for inaccurate advice.
arXiv  Detail & Related papers  (2024-02-02T19:35:34Z)
- LLatrieval: LLM-Verified Retrieval for Verifiable Generation [67.93134176912477]
Verifiable generation aims to let the large language model (LLM) generate text with supporting documents.
We propose LLatrieval (Large Language Model Verified Retrieval), where the LLM updates the retrieval result until it verifies that the retrieved documents can sufficiently support answering the question.
Experiments show that LLatrieval significantly outperforms extensive baselines and achieves state-of-the-art results.
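The LLatrieval summary describes a loop in which the LLM repeatedly updates the retrieval result until it verifies that the retrieved documents sufficiently support the answer. A hedged sketch of that control flow, with the LLM-backed steps abstracted as callables (`retrieve`, `verify`, and `update_query` are illustrative stand-ins, not the paper's actual interfaces):

```python
from typing import Callable

def verified_retrieve(
    question: str,
    retrieve: Callable[[str], list[str]],       # query -> documents
    verify: Callable[[str, list[str]], bool],   # do these docs support an answer?
    update_query: Callable[[str, list[str]], str],  # refine query from current docs
    max_rounds: int = 3,
) -> list[str]:
    """Iteratively refine retrieval until a verifier accepts the documents,
    returning the last retrieved set if no round passes verification."""
    query = question
    docs: list[str] = []
    for _ in range(max_rounds):
        docs = retrieve(query)
        if verify(question, docs):
            return docs
        query = update_query(question, docs)
    return docs
```

In the paper's setting, `verify` and `update_query` would themselves be LLM calls; here they are plain functions so the loop structure stands on its own.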
arXiv  Detail & Related papers  (2023-11-14T01:38:02Z)
- A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction [60.70089334782383]
Large language models (LLMs) have demonstrated great potential for domain-specific applications.
Recent disputes over GPT-4's law evaluation raise questions concerning their performance in real-world legal tasks.
We design practical baseline solutions based on LLMs and test on the task of legal judgment prediction.
arXiv  Detail & Related papers  (2023-10-18T07:38:04Z)
- LAiW: A Chinese Legal Large Language Models Benchmark [17.66376880475554]
General and legal domain LLMs have demonstrated strong performance in various tasks of LegalAI.
We are the first to build the Chinese legal LLMs benchmark LAiW, based on the logic of legal practice.
arXiv  Detail & Related papers  (2023-10-09T11:19:55Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
arXiv  Detail & Related papers  (2023-07-20T16:46:10Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes a LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules.
arXiv  Detail & Related papers  (2023-02-24T18:48:43Z) 
        This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
           This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.