Large Language Models and Explainable Law: a Hybrid Methodology
- URL: http://arxiv.org/abs/2311.11811v1
- Date: Mon, 20 Nov 2023 14:47:20 GMT
- Title: Large Language Models and Explainable Law: a Hybrid Methodology
- Authors: Marco Billi, Alessandro Parenti, Giuseppe Pisano, Marco Sanchi
- Abstract summary: The paper advocates for LLMs to enhance the accessibility, usage and explainability of rule-based legal systems.
A methodology is developed to explore the potential use of LLMs for translating the explanations produced by rule-based systems.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper advocates for LLMs to enhance the accessibility, usage and
explainability of rule-based legal systems, contributing to a democratic and
stakeholder-oriented view of legal technology. A methodology is developed to
explore the potential use of LLMs for translating the explanations produced by
rule-based systems, from high-level programming languages to natural language,
allowing all users a fast, clear, and accessible interaction with such
technologies. The study continues by building upon these explanations to
empower laypeople with the ability to execute complex juridical tasks on their
own, using a Chain of Prompts for the autonomous legal comparison of different
rule-based inferences, applied to the same factual case.
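The Chain of Prompts described above can be sketched as a two-step pipeline: first translate each rule-based inference trace into natural language, then compare the resulting explanations. This is an illustrative reconstruction, not the paper's actual implementation; the function names (`call_llm`, `translate_explanation`, `compare_inferences`) and the Prolog-style traces are assumptions, and a deterministic stub stands in for a real LLM call.

```python
# Hypothetical Chain of Prompts: each step's output feeds the next prompt.
# call_llm is a deterministic stub standing in for a real LLM API call.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned responses."""
    if prompt.startswith("Translate"):
        return "Under Article 3, the tenant is entitled to a refund."
    return "Both systems reach the same conclusion on the facts."

def translate_explanation(trace: str) -> str:
    """Step 1: turn a rule-based inference trace into plain language."""
    return call_llm(f"Translate this Prolog-style trace into plain English:\n{trace}")

def compare_inferences(nl_a: str, nl_b: str) -> str:
    """Step 2: compare two natural-language explanations of the same case."""
    return call_llm(f"Compare these two legal conclusions:\nA: {nl_a}\nB: {nl_b}")

# Two rule-based systems reasoning over the same factual case.
trace_a = "refund(tenant) :- breach(landlord), article(3)."
trace_b = "refund(tenant) :- defect(property), article(3)."

nl_a = translate_explanation(trace_a)
nl_b = translate_explanation(trace_b)
verdict = compare_inferences(nl_a, nl_b)
print(verdict)
```

In a real deployment, `call_llm` would wrap an actual model API, and the comparison prompt would carry the full case facts alongside both explanations.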
Related papers

- KRAG Framework for Enhancing LLMs in the Legal Domain [0.48451657575793666]
This paper introduces Knowledge Representation Augmented Generation (KRAG), a framework designed to enhance the capabilities of Large Language Models (LLMs) within domain-specific applications.
We present Soft PROLEG, an implementation model under KRAG, which uses inference graphs to aid LLMs in delivering structured legal reasoning.
arXiv Detail & Related papers (2024-10-10T02:48:06Z)
- Can Large Language Models Grasp Legal Theories? Enhance Legal Reasoning with Insights from Multi-Agent Collaboration [27.047809869136458]
Large Language Models (LLMs) could struggle to fully understand legal theories and perform legal reasoning tasks.
We introduce a challenging task (confusing charge prediction) to better evaluate LLMs' understanding of legal theories and reasoning capabilities.
We also propose a novel Multi-Agent framework for improving complex legal reasoning capability.
arXiv Detail & Related papers (2024-10-03T14:15:00Z) - Leveraging Knowledge Graphs and LLMs to Support and Monitor Legislative Systems [0.0]
This work investigates how Legislative Knowledge Graphs and LLMs can synergize and support legislative processes.
To this end, we develop Legis AI Platform, an interactive platform focused on Italian legislation that supports legislative analysis.
arXiv Detail & Related papers (2024-09-20T06:21:03Z) - Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by LLM-based Symbolic Programs (LSPs) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z) - AutoGuide: Automated Generation and Selection of State-Aware Guidelines for Large Language Model Agents [74.17623527375241]
AutoGuide bridges the knowledge gap in pre-trained LLMs by leveraging implicit knowledge in offline experiences.
We show that our approach outperforms competitive LLM-based baselines by a large margin in sequential decision-making benchmarks.
arXiv Detail & Related papers (2024-03-13T22:06:03Z) - Leveraging Large Language Models for Learning Complex Legal Concepts through Storytelling [43.243889347008455]
We present a novel application of large language models (LLMs) in legal education to help non-experts learn intricate legal concepts through storytelling.
We introduce a new dataset LegalStories, which consists of 294 complex legal doctrines, each accompanied by a story and a set of multiple-choice questions.
We find that LLM-generated stories enhance comprehension of legal concepts and interest in law among non-native speakers, compared to definitions alone.
arXiv Detail & Related papers (2024-02-26T20:56:06Z) - From Text to Structure: Using Large Language Models to Support the
Development of Legal Expert Systems [0.6249768559720122]
Rule-based expert systems focused on legislation can support laypeople in understanding how legislation applies to them and provide them with helpful context and information.
Here, we investigate to what degree large language models (LLMs), such as GPT-4, are able to automatically extract structured representations from legislation.
We use LLMs to create pathways from legislation, according to the JusticeBot methodology for legal decision support systems, evaluate the pathways and compare them to manually created pathways.
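Pathway extraction of the kind described above can be sketched as a prompt that asks an LLM to turn a legislative provision into a tree of yes/no questions with outcomes at the leaves. This is a minimal sketch in the spirit of the JusticeBot methodology, not that paper's actual pipeline; the prompt wording, the `extract_pathway` helper, and the canned response are all assumptions.

```python
# Hypothetical pathway extraction: prompt a model for a JSON decision tree
# and parse it. call_llm is a stub standing in for GPT-4.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned JSON pathway."""
    return json.dumps({
        "question": "Did the landlord receive written notice?",
        "yes": {"outcome": "Tenant may terminate the lease."},
        "no": {"outcome": "Notice requirement not met."},
    })

def extract_pathway(provision: str) -> dict:
    """Ask the model for a yes/no decision pathway as JSON and parse it."""
    prompt = (
        "Convert this provision into a JSON tree of yes/no questions "
        f"with outcomes at the leaves:\n{provision}"
    )
    return json.loads(call_llm(prompt))

pathway = extract_pathway(
    "A tenant may terminate the lease if the landlord received written notice."
)
print(pathway["question"])
```

Evaluation, as in the paper, would then compare such generated pathways against manually authored ones for the same provisions.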
arXiv Detail & Related papers (2023-11-01T18:31:02Z) - ChatABL: Abductive Learning via Natural Language Interaction with
ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty in bridging perception, language understanding and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)
- LISA: Learning Interpretable Skill Abstractions from Language [85.20587800593293]
We propose a hierarchical imitation learning framework that can learn diverse, interpretable skills from language-conditioned demonstrations.
Our method demonstrates a more natural way to condition on language in sequential decision-making problems.
arXiv Detail & Related papers (2022-02-28T19:43:24Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release Lawformer, a Longformer-based pre-trained language model for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.