Unlocking Practical Applications in Legal Domain: Evaluation of GPT for
Zero-Shot Semantic Annotation of Legal Texts
- URL: http://arxiv.org/abs/2305.04417v1
- Date: Mon, 8 May 2023 01:55:53 GMT
- Title: Unlocking Practical Applications in Legal Domain: Evaluation of GPT for
Zero-Shot Semantic Annotation of Legal Texts
- Authors: Jaromir Savelka
- Abstract summary: We evaluate the capability of a state-of-the-art generative pre-trained transformer (GPT) model to perform semantic annotation of short text snippets.
We found that the GPT model performs surprisingly well in zero-shot settings on diverse types of documents.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We evaluated the capability of a state-of-the-art generative pre-trained
transformer (GPT) model to perform semantic annotation of short text snippets
(one to few sentences) coming from legal documents of various types.
Discussions of potential uses (e.g., document drafting, summarization) of this
emerging technology in the legal domain have intensified, but to date there has not
been a rigorous analysis of these large language models' (LLM) capacity in
sentence-level semantic annotation of legal texts in zero-shot learning
settings. Yet, this particular type of use could unlock many practical
applications (e.g., in contract review) and research opportunities (e.g., in
empirical legal studies). We fill the gap with this study. We examined if and
how successfully the model can semantically annotate small batches of short
text snippets (10-50) based exclusively on concise definitions of the semantic
types. We found that the GPT model performs surprisingly well in zero-shot
settings on diverse types of documents (F1=.73 on a task involving court
opinions, .86 for contracts, and .54 for statutes and regulations). These
findings can be leveraged by legal scholars and practicing lawyers alike to
guide their decisions in integrating LLMs into a wide range of workflows involving
semantic annotation of legal texts.
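The annotation setup described in the abstract (a concise definition per semantic type, snippets submitted in small batches, one label returned per snippet) can be approximated with a short script. The sketch below is illustrative only: the type names, definitions, prompt wording, and model identifier are assumptions rather than the paper's exact configuration, and it assumes access to the OpenAI chat-completions API.
```python
# Minimal sketch of zero-shot semantic annotation with a chat-style LLM.
# The type names, definitions, prompt wording, and model id below are
# illustrative assumptions, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Concise definitions of the semantic types (hypothetical contract-review types).
TYPE_DEFINITIONS = {
    "Obligation": "A sentence imposing a duty on a party (e.g., 'shall', 'must').",
    "Permission": "A sentence granting a right or allowance to a party.",
    "Definition": "A sentence defining a term used elsewhere in the document.",
    "Other": "Any sentence not covered by the types above.",
}


def annotate_batch(snippets, model="gpt-3.5-turbo"):
    """Label a small batch (10-50) of text snippets, returning one type per snippet."""
    definitions = "\n".join(f"- {name}: {desc}" for name, desc in TYPE_DEFINITIONS.items())
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(snippets))
    prompt = (
        "Assign exactly one of the semantic types below to each snippet.\n"
        f"Types:\n{definitions}\n\nSnippets:\n{numbered}\n\n"
        "Answer with one line per snippet in the form '<number>. <type>'."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    lines = response.choices[0].message.content.strip().splitlines()
    return [line.split(".", 1)[1].strip() for line in lines if "." in line]
```
Predicted labels can then be scored against gold annotations with a standard metric, e.g. sklearn.metrics.f1_score(gold, predicted, average="micro"), which corresponds to the kind of F1 figures reported above.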
Related papers
- Legal Evalutions and Challenges of Large Language Models [42.51294752406578]
We use the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.
We compare current state-of-the-art LLMs, including open-source, closed-source, and legal-specific models trained specifically for the legal domain.
arXiv Detail & Related papers (2024-11-15T12:23:12Z)
- Topic Modelling Case Law Using a Large Language Model and a New Taxonomy for UK Law: AI Insights into Summary Judgment [0.0]
This paper develops and applies a novel taxonomy for topic modelling summary judgment cases in the United Kingdom.
Using a curated dataset of summary judgment cases, we use the Large Language Model Claude 3 Opus to explore functional topics and trends.
We find that Claude 3 Opus correctly classified the topic with an accuracy of 87.10%.
arXiv Detail & Related papers (2024-05-21T16:30:25Z)
- LegalPro-BERT: Classification of Legal Provisions by fine-tuning BERT Large Language Model [0.0]
Contract analysis requires the identification and classification of key provisions and paragraphs within an agreement.
LegalPro-BERT is a BERT transformer architecture model that we fine-tune to efficiently handle the classification task for legal provisions.
arXiv Detail & Related papers (2024-04-15T19:08:48Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- MUSER: A Multi-View Similar Case Retrieval Dataset [65.36779942237357]
Similar case retrieval (SCR) is a representative legal AI application that plays a pivotal role in promoting judicial fairness.
Existing SCR datasets only focus on the fact description section when judging the similarity between cases.
We present MUSER, a similar case retrieval dataset based on multi-view similarity measurement, with comprehensive, sentence-level legal element annotations.
arXiv Detail & Related papers (2023-10-24T08:17:11Z)
- Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are previous legal cases with similar facts, which serve as the basis for judging subsequent cases in national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z)
- Enhancing Pre-Trained Language Models with Sentence Position Embeddings for Rhetorical Roles Recognition in Legal Opinions [0.16385815610837165]
Legal opinions continue to grow in length, making it increasingly challenging to develop a model that can accurately predict their rhetorical roles.
We propose a novel model architecture for automatically predicting rhetorical roles using pre-trained language models (PLMs) enhanced with knowledge of sentence position information.
Based on an annotated corpus from the LegalEval@SemEval2023 competition, we demonstrate that our approach requires fewer parameters, resulting in lower computational costs.
arXiv Detail & Related papers (2023-10-08T20:33:55Z)
- CiteCaseLAW: Citation Worthiness Detection in Caselaw for Legal Assistive Writing [44.75251805925605]
We introduce a labeled dataset of 178M sentences for citation-worthiness detection in the legal domain, drawn from the Caselaw Access Project (CAP).
The performance of various deep learning models was examined on this novel dataset.
The domain-specific pre-trained model tends to outperform other models, with an 88% F1-score for the citation-worthiness detection task.
arXiv Detail & Related papers (2023-05-03T04:20:56Z)
- SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval [75.05173891207214]
Legal case retrieval plays a core role in the intelligent legal system.
Most existing language models have difficulty understanding the long-distance dependencies between different structures.
We propose a new Structure-Aware pre-traIned language model for LEgal case Retrieval.
arXiv Detail & Related papers (2023-04-22T10:47:01Z)
- LexGLUE: A Benchmark Dataset for Legal Language Understanding in English [15.026117429782996]
We introduce the Legal General Language Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks.
We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
arXiv Detail & Related papers (2021-10-03T10:50:51Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release a Longformer-based pre-trained language model, named Lawformer, for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)