From Text to Structure: Using Large Language Models to Support the
Development of Legal Expert Systems
- URL: http://arxiv.org/abs/2311.04911v1
- Date: Wed, 1 Nov 2023 18:31:02 GMT
- Title: From Text to Structure: Using Large Language Models to Support the
Development of Legal Expert Systems
- Authors: Samyar Janatian, Hannes Westermann, Jinzhe Tan, Jaromir Savelka, Karim
Benyekhlef
- Abstract summary: Rule-based expert systems focused on legislation can support laypeople in understanding how legislation applies to them and provide them with helpful context and information.
Here, we investigate to what degree large language models (LLMs), such as GPT-4, are able to automatically extract structured representations from legislation.
We use LLMs to create pathways from legislation, according to the JusticeBot methodology for legal decision support systems, evaluate the pathways and compare them to manually created pathways.
- Score: 0.6249768559720122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Encoding legislative text in a formal representation is an important
prerequisite to different tasks in the field of AI & Law. For example,
rule-based expert systems focused on legislation can support laypeople in
understanding how legislation applies to them and provide them with helpful
context and information. However, the process of analyzing legislation and
other sources to encode it in the desired formal representation can be
time-consuming and represents a bottleneck in the development of such systems.
Here, we investigate to what degree large language models (LLMs), such as
GPT-4, are able to automatically extract structured representations from
legislation. We use LLMs to create pathways from legislation, according to the
JusticeBot methodology for legal decision support systems, evaluate the
pathways and compare them to manually created pathways. The results are
promising, with 60% of generated pathways being rated as equivalent or better
than manually created ones in a blind comparison. The approach suggests a
promising path to leverage the capabilities of LLMs to ease the costly
development of systems based on symbolic approaches that are transparent and
explainable.
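As a concrete illustration of the approach described above, the sketch below prompts an LLM to encode a provision as a JusticeBot-style pathway, i.e., a tree of yes/no questions about legal conditions that leads to outcomes. The prompt wording, the JSON schema, and the paraphrased example provision are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: prompting an LLM to encode a legislative provision as a
# JusticeBot-style pathway (a tree of yes/no questions leading to legal
# outcomes). The prompt wording, the JSON schema, and the paraphrased
# example provision are illustrative assumptions, not the paper's pipeline.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROVISION = (  # paraphrased, illustrative landlord-tenant style provision
    "A lessee may sublease the dwelling with the consent of the lessor. "
    "The lessor may not refuse consent without a serious reason."
)

PROMPT = f"""Encode the following legislative provision as a decision pathway:
a yes/no question about each legal condition, where each answer leads either
to a follow-up question or to a final outcome. Respond with JSON only, using
objects of the form {{"question": ..., "if_yes": ..., "if_no": ...}} whose
"if_yes"/"if_no" values are either a nested question object or an outcome
string.

Provision:
{PROVISION}
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,  # deterministic output eases comparison with manual pathways
)

# In practice the reply may need light cleanup (e.g., stripping code fences)
# before parsing.
pathway = json.loads(response.choices[0].message.content)
print(json.dumps(pathway, indent=2))
```

Requesting JSON against a fixed schema keeps the generated pathways directly comparable to manually authored ones, which is what the paper's blind comparison requires.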
Related papers
- KRAG Framework for Enhancing LLMs in the Legal Domain [0.48451657575793666]
This paper introduces Knowledge Representation Augmented Generation (KRAG), a framework designed to enhance the capabilities of Large Language Models (LLMs) within domain-specific applications.
We present Soft PROLEG, an implementation model under KRAG, which uses inference graphs to aid LLMs in delivering structured legal reasoning.
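The inference-graph idea can be pictured with a toy sketch: legal conclusions depend on conditions via rules, and walking the graph yields a structured reasoning trace. The rules and facts below are hypothetical, and this conveys only the general idea, not Soft PROLEG's actual representation.

```python
# Toy sketch of an inference graph for structured legal reasoning: each
# conclusion depends on conditions via a rule, and walking the graph yields
# a reasoning trace. Hypothetical rules and facts; this illustrates the
# general idea only, not Soft PROLEG's actual representation.
RULES = {  # conclusion -> conditions that must all hold
    "contract_valid": ["offer_made", "offer_accepted", "consideration_given"],
}
FACTS = {"offer_made": True, "offer_accepted": True, "consideration_given": False}

def infer(goal: str, depth: int = 0) -> bool:
    indent = "  " * depth
    if goal in FACTS:  # leaf condition, established directly by the facts
        print(f"{indent}{goal}: {FACTS[goal]} (fact)")
        return FACTS[goal]
    results = [infer(cond, depth + 1) for cond in RULES[goal]]  # evaluate all
    holds = all(results)
    print(f"{indent}{goal}: {holds} (rule)")
    return holds

infer("contract_valid")  # prints a trace ending in "contract_valid: False (rule)"
```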
arXiv Detail & Related papers (2024-10-10T02:48:06Z)
- Leveraging Knowledge Graphs and LLMs to Support and Monitor Legislative Systems [0.0]
This work investigates how Legislative Knowledge Graphs and LLMs can synergize and support legislative processes.
To this aim, we develop Legis AI Platform, an interactive platform focused on Italian legislation that enhances the possibility of conducting legislative analysis.
arXiv Detail & Related papers (2024-09-20T06:21:03Z)
- Using Large Language Models for the Interpretation of Building Regulations [7.013802453969655]
Large language models (LLMs) can generate logically coherent text and source code responding to user prompts.
This paper evaluates the performance of LLMs in translating building regulations into LegalRuleML in a few-shot learning setup.
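A schematic version of such a few-shot setup is sketched below. The clause, the target markup, and the prompt format are invented for illustration; the XML is simplified and not guaranteed to be schema-valid LegalRuleML.

```python
# Schematic few-shot prompt for translating a building-regulation clause
# into LegalRuleML. The clause, the target markup, and the prompt format
# are invented for illustration; the XML is simplified and not guaranteed
# to be schema-valid LegalRuleML.
FEW_SHOT_EXAMPLE = """Clause: Every exit door must be at least 850 mm wide.
LegalRuleML:
<lrml:PrescriptiveStatement key="ps1">
  <ruleml:Rule>
    <ruleml:if><ruleml:Atom>isExitDoor(d)</ruleml:Atom></ruleml:if>
    <ruleml:then><ruleml:Atom>hasMinWidthMm(d, 850)</ruleml:Atom></ruleml:then>
  </ruleml:Rule>
</lrml:PrescriptiveStatement>"""

NEW_CLAUSE = "Every stairway must have a handrail on at least one side."

prompt = (
    "Translate each building-regulation clause into LegalRuleML.\n\n"
    f"{FEW_SHOT_EXAMPLE}\n\n"
    f"Clause: {NEW_CLAUSE}\nLegalRuleML:"
)
print(prompt)  # this prompt would then be sent to the LLM, as in the sketch above
```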
arXiv Detail & Related papers (2024-07-26T08:30:47Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by an LLM-based Symbolic Program (LSP) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Generative Context-aware Fine-tuning of Self-supervised Speech Models [54.389711404209415]
We study the use of context information generated by large language models (LLMs).
We propose an approach to distill the generated information during fine-tuning of self-supervised speech models.
We evaluate the proposed approach using the SLUE and Libri-light benchmarks for several downstream tasks: automatic speech recognition, named entity recognition, and sentiment analysis.
arXiv Detail & Related papers (2023-12-15T15:46:02Z)
- Large Language Models and Explainable Law: a Hybrid Methodology [44.99833362998488]
The paper advocates for LLMs to enhance the accessibility, usage and explainability of rule-based legal systems.
A methodology is developed to explore the potential use of LLMs for translating the explanations produced by rule-based systems.
arXiv Detail & Related papers (2023-11-20T14:47:20Z)
- Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are previous legal cases with similar facts, which form the basis for judging subsequent cases in national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z)
- Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence [5.07013500385659]
This paper explores Large Language Models' (LLMs) capabilities in applying tax law.
Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release.
Findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels.
arXiv Detail & Related papers (2023-06-12T12:40:48Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release a Longformer-based pre-trained language model, named Lawformer, for the understanding of long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)