The Legal Argument Reasoning Task in Civil Procedure
- URL: http://arxiv.org/abs/2211.02950v1
- Date: Sat, 5 Nov 2022 17:41:00 GMT
- Title: The Legal Argument Reasoning Task in Civil Procedure
- Authors: Leonard Bongard, Lena Held, Ivan Habernal
- Abstract summary: We present a new NLP task and dataset from the domain of the U.S. civil procedure.
Each instance of the dataset consists of a general introduction to the case, a particular question, and a possible solution argument.
- Score: 2.079168053329397
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a new NLP task and dataset from the domain of the U.S. civil
procedure. Each instance of the dataset consists of a general introduction to
the case, a particular question, and a possible solution argument, accompanied
by a detailed analysis of why the argument applies in that case. Since the
dataset is based on a book aimed at law students, we believe that it represents
a truly complex task for benchmarking modern legal language models. Our
baseline evaluation shows that fine-tuning a legal transformer provides some
advantage over random baseline models, but our analysis reveals that the actual
ability to infer legal arguments remains a challenging open research question.
Related papers
- InternLM-Law: An Open Source Chinese Legal Large Language Model [72.2589401309848]
InternLM-Law is a specialized LLM tailored for addressing diverse legal queries related to Chinese laws.
We meticulously construct a dataset in the Chinese legal domain, encompassing over 1 million queries.
InternLM-Law achieves the highest average performance on LawBench, outperforming state-of-the-art models, including GPT-4, on 13 out of 20 subtasks.
arXiv Detail & Related papers (2024-06-21T06:19:03Z)
- Empowering Prior to Court Legal Analysis: A Transparent and Accessible Dataset for Defensive Statement Classification and Interpretation [5.646219481667151]
This paper introduces a novel dataset tailored for classification of statements made during police interviews, prior to court proceedings.
We introduce a fine-tuned DistilBERT model that achieves state-of-the-art performance in distinguishing truthful from deceptive statements.
We also present an XAI interface that empowers both legal professionals and non-specialists to interact with and benefit from our system.
arXiv Detail & Related papers (2024-05-17T11:22:27Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Exploring the Potential of Large Language Models in Computational Argumentation [54.85665903448207]
Large language models (LLMs) have demonstrated impressive capabilities in understanding context and generating natural language.
This work aims to embark on an assessment of LLMs, such as ChatGPT, Flan models, and LLaMA2 models, in both zero-shot and few-shot settings.
arXiv Detail & Related papers (2023-11-15T15:12:15Z)
- MUSER: A Multi-View Similar Case Retrieval Dataset [65.36779942237357]
Similar case retrieval (SCR) is a representative legal AI application that plays a pivotal role in promoting judicial fairness.
Existing SCR datasets only focus on the fact description section when judging the similarity between cases.
We present MUSER, a similar case retrieval dataset based on multi-view similarity measurement, with comprehensive sentence-level legal element annotations.
arXiv Detail & Related papers (2023-10-24T08:17:11Z)
- Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models [10.834755282333589]
The Long-form Legal Question Answering (LLeQA) dataset comprises 1,868 expert-annotated legal questions in the French language.
Our experimental results demonstrate promising performance on automatic evaluation metrics.
As one of the only comprehensive, expert-annotated long-form LQA datasets, LLeQA has the potential not only to accelerate research towards resolving a significant real-world issue, but also to act as a rigorous benchmark for evaluating NLP models in specialized domains.
arXiv Detail & Related papers (2023-09-29T08:23:19Z)
- Towards Argument-Aware Abstractive Summarization of Long Legal Opinions with Summary Reranking [6.9827388859232045]
We propose a simple approach for the abstractive summarization of long legal opinions that considers the argument structure of the document.
Our approach involves using argument role information to generate multiple candidate summaries, then reranking these candidates based on alignment with the document's argument structure.
We demonstrate the effectiveness of our approach on a dataset of long legal opinions and show that it outperforms several strong baselines.
arXiv Detail & Related papers (2023-06-01T13:44:45Z)
- SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval [75.05173891207214]
Legal case retrieval plays a core role in the intelligent legal system.
Most existing language models have difficulty understanding the long-distance dependencies between different structures.
We propose a new Structure-Aware pre-traIned language model for LEgal case Retrieval.
arXiv Detail & Related papers (2023-04-22T10:47:01Z)
- Legal Case Document Summarization: Extractive and Abstractive Methods and their Evaluation [11.502115682980559]
Summarization of legal case judgement documents is a challenging problem in Legal NLP.
Few analyses exist of how different families of summarization models perform when applied to legal case documents.
arXiv Detail & Related papers (2022-10-14T05:43:08Z)
- Enhancing Legal Argument Mining with Domain Pre-training and Neural Networks [0.45119235878273]
The contextual word embedding model BERT has proved its ability on downstream tasks with limited quantities of annotated data.
BERT and its variants help to reduce the burden of complex annotation work in many interdisciplinary research areas.
arXiv Detail & Related papers (2022-02-27T21:24:53Z)
- AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding, and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.