Enhancing Pre-Trained Language Models with Sentence Position Embeddings
for Rhetorical Roles Recognition in Legal Opinions
- URL: http://arxiv.org/abs/2310.05276v1
- Date: Sun, 8 Oct 2023 20:33:55 GMT
- Title: Enhancing Pre-Trained Language Models with Sentence Position Embeddings
for Rhetorical Roles Recognition in Legal Opinions
- Authors: Anas Belfathi, Nicolas Hernandez and Laura Monceaux
- Abstract summary: The size of legal opinions continues to grow, making it increasingly challenging to develop a model that can accurately predict the rhetorical roles of legal opinions.
We propose a novel model architecture for automatically predicting rhetorical roles using pre-trained language models (PLMs) enhanced with knowledge of sentence position information.
Based on an annotated corpus from the LegalEval@SemEval2023 competition, we demonstrate that our approach requires fewer parameters, resulting in lower computational costs.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The legal domain is a vast and complex field that involves a considerable
amount of text analysis, including laws, legal arguments, and legal opinions.
Legal practitioners must analyze these texts to understand legal cases,
research legal precedents, and prepare legal documents. The size of legal
opinions continues to grow, making it increasingly challenging to develop a
model that can accurately predict the rhetorical roles of legal opinions given
their complexity and diversity. In this research paper, we propose a novel
model architecture for automatically predicting rhetorical roles using
pre-trained language models (PLMs) enhanced with knowledge of sentence position
information within a document. Based on an annotated corpus from the
LegalEval@SemEval2023 competition, we demonstrate that our approach requires
fewer parameters, resulting in lower computational costs when compared to
complex architectures employing a hierarchical model in a global-context, yet
it achieves great performance. Moreover, we show that adding more attention to
a hierarchical model based only on BERT in the local-context, along with
incorporating sentence position information, enhances the results.
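The core idea of enriching PLM sentence representations with sentence-position information can be sketched as follows. This is an illustrative assumption of one possible design, not the paper's exact formulation: relative sentence positions are bucketed into a fixed-size learned embedding table, and the looked-up position embedding is added to each sentence embedding. The function name, bucketing scheme, and dimensions are all hypothetical.

```python
import numpy as np

def add_sentence_position_embeddings(sent_embs, pos_table):
    """Add a learned position embedding to each sentence embedding.

    sent_embs: (n_sentences, dim) array of PLM sentence representations
    pos_table: (n_buckets, dim) learned position-embedding table

    Each sentence's relative position in the document is mapped to one
    of a fixed number of buckets, so documents of any length share the
    same table.
    """
    n, _ = sent_embs.shape
    n_buckets = pos_table.shape[0]
    # relative position in [0, 1), then bucketed to a table index
    buckets = np.floor(np.arange(n) / n * n_buckets).astype(int)
    buckets = np.clip(buckets, 0, n_buckets - 1)
    return sent_embs + pos_table[buckets]

# toy example: 4 sentences, 8-dim embeddings, 10 position buckets
rng = np.random.default_rng(0)
sents = rng.normal(size=(4, 8))
table = rng.normal(size=(10, 8))
enriched = add_sentence_position_embeddings(sents, table)
```

In a trained model the table would be a learnable parameter updated jointly with the PLM; the additive combination keeps the parameter count far below that of a global-context hierarchical encoder, consistent with the abstract's efficiency claim.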
Related papers
- LawLLM: Law Large Language Model for the US Legal System [43.13850456765944]
We introduce the Law Large Language Model (LawLLM), a multi-task model specifically designed for the US legal domain.
LawLLM excels at Similar Case Retrieval (SCR), Precedent Case Recommendation (PCR), and Legal Judgment Prediction (LJP)
We propose customized data preprocessing techniques for each task that transform raw legal data into a trainable format.
arXiv Detail & Related papers (2024-07-27T21:51:30Z)
- InternLM-Law: An Open Source Chinese Legal Large Language Model [72.2589401309848]
InternLM-Law is a specialized LLM tailored for addressing diverse legal queries related to Chinese laws.
We meticulously construct a dataset in the Chinese legal domain, encompassing over 1 million queries.
InternLM-Law achieves the highest average performance on LawBench, outperforming state-of-the-art models, including GPT-4, on 13 out of 20 subtasks.
arXiv Detail & Related papers (2024-06-21T06:19:03Z)
- Empowering Prior to Court Legal Analysis: A Transparent and Accessible Dataset for Defensive Statement Classification and Interpretation [5.646219481667151]
This paper introduces a novel dataset tailored for classification of statements made during police interviews, prior to court proceedings.
We introduce a fine-tuned DistilBERT model that achieves state-of-the-art performance in distinguishing truthful from deceptive statements.
We also present an XAI interface that empowers both legal professionals and non-specialists to interact with and benefit from our system.
arXiv Detail & Related papers (2024-05-17T11:22:27Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Leveraging Large Language Models for Relevance Judgments in Legal Case Retrieval [18.058942674792604]
We propose a novel few-shot workflow tailored to relevance judgment of legal cases.
By comparing the relevance judgments of LLMs and human experts, we empirically show that we can obtain reliable relevance judgments.
arXiv Detail & Related papers (2024-03-27T09:46:56Z)
- Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are the previous legal cases with similar facts, which are the basis for the judgment of the subsequent case in national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z)
- SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval [75.05173891207214]
Legal case retrieval plays a core role in the intelligent legal system.
Most existing language models have difficulty understanding the long-distance dependencies between different structures.
We propose a new Structure-Aware pre-traIned language model for LEgal case Retrieval.
arXiv Detail & Related papers (2023-04-22T10:47:01Z)
- Corpus for Automatic Structuring of Legal Documents [1.8025738207124173]
We introduce a corpus of legal judgment documents in English that are segmented into topical and coherent parts.
We develop baseline models for automatically predicting rhetorical roles in a legal document based on the annotated corpus.
We show the application of rhetorical roles to improve performance on the tasks of summarization and legal judgment prediction.
arXiv Detail & Related papers (2022-01-31T11:12:44Z)
- Semantic Segmentation of Legal Documents via Rhetorical Roles [3.285073688021526]
This paper proposes a Rhetorical Roles (RR) system for segmenting a legal document into semantically coherent units.
We develop a multitask learning-based deep learning model with document rhetorical role label shift as an auxiliary task for segmenting a legal document.
arXiv Detail & Related papers (2021-12-03T10:49:19Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release the Longformer-based pre-trained language model, named as Lawformer, for Chinese legal long documents understanding.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.