STRONG -- Structure Controllable Legal Opinion Summary Generation
- URL: http://arxiv.org/abs/2309.17280v1
- Date: Fri, 29 Sep 2023 14:31:41 GMT
- Title: STRONG -- Structure Controllable Legal Opinion Summary Generation
- Authors: Yang Zhong and Diane Litman
- Abstract summary: We propose an approach for the structure controllable summarization of long legal opinions.
Our approach involves using predicted argument role information to guide the model in generating coherent summaries.
- Score: 8.527175356478455
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose an approach for the structure controllable summarization of long
legal opinions that considers the argument structure of the document. Our
approach involves using predicted argument role information to guide the model
in generating coherent summaries that follow a provided structure pattern. We
demonstrate the effectiveness of our approach on a dataset of legal opinions
and show that it outperforms several strong baselines with respect to ROUGE,
BERTScore, and structure similarity.
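The abstract does not spell out how the structure pattern is supplied or how structure similarity is scored. A minimal sketch of the general idea, under two stated assumptions: the pattern is prepended as control tokens (the `<Issue>`-style tokens and role names here are hypothetical), and structure similarity is taken as sequence similarity between predicted and target role sequences, which is not necessarily the paper's exact metric.

```python
# Hypothetical sketch: condition a summarizer on an argument-role
# structure pattern, then score how well the output follows it.
# Role names, token format, and the similarity metric are
# illustrative assumptions, not the paper's formulation.
from difflib import SequenceMatcher


def build_controlled_input(document: str, pattern: list[str]) -> str:
    """Prepend the desired structure pattern as control tokens."""
    control = " ".join(f"<{role}>" for role in pattern)
    return f"{control} {document}"


def structure_similarity(pred_roles: list[str], target_roles: list[str]) -> float:
    """Similarity between predicted and target role sequences, in [0, 1]."""
    return SequenceMatcher(None, pred_roles, target_roles).ratio()


inp = build_controlled_input("The court held ...", ["Issue", "Reason", "Conclusion"])
sim = structure_similarity(["Issue", "Reason", "Conclusion"],
                           ["Issue", "Conclusion"])
```

Here `sim` rewards summaries whose role sequence follows the requested pattern, which is the quantity a "structure similarity" evaluation would track.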
Related papers
- A Methodology for Gradual Semantics for Structured Argumentation under Incomplete Information [15.717458041314194]
We provide a novel methodology for obtaining gradual semantics for structured argumentation frameworks.
Our methodology accommodates incomplete information about arguments' premises.
We demonstrate the potential of our approach by introducing two different instantiations of the methodology.
arXiv Detail & Related papers (2024-10-29T16:38:35Z)
- Hierarchical Indexing for Retrieval-Augmented Opinion Summarization [60.5923941324953]
We propose a method for unsupervised abstractive opinion summarization that combines the attributability and scalability of extractive approaches with the coherence and fluency of Large Language Models (LLMs).
Our method, HIRO, learns an index structure that maps sentences to a path through a semantically organized discrete hierarchy.
At inference time, we populate the index and use it to identify and retrieve clusters of sentences containing popular opinions from input reviews.
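A toy illustration of the indexing idea described above: each sentence is mapped to a path in a discrete hierarchy, the index is populated, and the most populated clusters are retrieved as popular opinions. The keyword-based path assignment is a stand-in for HIRO's learned encoder.

```python
# Toy sketch of hierarchical indexing for opinion retrieval.
# assign_path() is a hypothetical keyword-based stand-in for a
# learned sentence-to-path mapping.
from collections import defaultdict


def assign_path(sentence: str) -> tuple[str, ...]:
    """Map a sentence to a path through a small, fixed hierarchy."""
    if "battery" in sentence:
        return ("hardware", "battery")
    if "screen" in sentence:
        return ("hardware", "screen")
    return ("other",)


def retrieve_popular(sentences: list[str], k: int = 1) -> list[list[str]]:
    """Populate the index, then return the k largest sentence clusters."""
    index: dict[tuple[str, ...], list[str]] = defaultdict(list)
    for s in sentences:
        index[assign_path(s)].append(s)
    return sorted(index.values(), key=len, reverse=True)[:k]


reviews = ["battery dies fast", "battery life is poor", "screen is bright"]
top = retrieve_popular(reviews, k=1)
```

The retrieved clusters would then be passed to an LLM to verbalize, which is how the method keeps generated claims attributable to input sentences.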
arXiv Detail & Related papers (2024-03-01T10:38:07Z)
- Document Structure in Long Document Transformers [64.76981299465885]
Long documents often exhibit structure with hierarchically organized elements of different functions, such as section headers and paragraphs.
Despite the omnipresence of document structure, its role in natural language processing (NLP) remains opaque.
Do long-document Transformer models acquire an internal representation of document structure during pre-training?
How can structural information be communicated to a model after pre-training, and how does it influence downstream performance?
arXiv Detail & Related papers (2024-01-31T08:28:06Z)
- Towards Argument-Aware Abstractive Summarization of Long Legal Opinions with Summary Reranking [6.9827388859232045]
We propose a simple approach for the abstractive summarization of long legal opinions that considers the argument structure of the document.
Our approach involves using argument role information to generate multiple candidate summaries, then reranking these candidates based on alignment with the document's argument structure.
We demonstrate the effectiveness of our approach on a dataset of long legal opinions and show that it outperforms several strong baselines.
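The reranking step described above can be sketched as follows: given several candidate summaries with predicted argument-role sequences, pick the candidate whose roles best align with the document's. The candidates, role tags, and alignment score here are illustrative stand-ins, not the paper's exact components.

```python
# Sketch of summary reranking by argument-structure alignment.
# Each candidate carries a predicted role sequence; the alignment
# score (sequence similarity) is an illustrative assumption.
from difflib import SequenceMatcher


def rerank(candidates: list[tuple[str, list[str]]], doc_roles: list[str]) -> str:
    """candidates: (summary text, predicted role sequence); return best text."""
    def alignment(cand: tuple[str, list[str]]) -> float:
        return SequenceMatcher(None, cand[1], doc_roles).ratio()
    return max(candidates, key=alignment)[0]


doc_roles = ["Issue", "Reason", "Conclusion"]
cands = [
    ("summary A", ["Conclusion"]),
    ("summary B", ["Issue", "Reason", "Conclusion"]),
]
best = rerank(cands, doc_roles)
```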
arXiv Detail & Related papers (2023-06-01T13:44:45Z)
- Incorporating Distributions of Discourse Structure for Long Document Abstractive Summarization [11.168330694255404]
This paper introduces the 'RSTformer', a novel summarization model that comprehensively incorporates both the types and uncertainty of rhetorical relations.
Our RST-attention mechanism, rooted in document-level rhetorical structure, is an extension of the recently devised Longformer framework.
arXiv Detail & Related papers (2023-05-26T09:51:47Z)
- StructGPT: A General Framework for Large Language Model to Reason over Structured Data [117.13986738340027]
We develop an Iterative Reading-then-Reasoning (IRR) approach for solving question answering tasks based on structured data.
Our approach can significantly boost the performance of ChatGPT and achieve comparable performance against the full-data supervised-tuning baselines.
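The iterative reading-then-reasoning loop can be sketched as: a reading interface extracts the slice of structured data relevant to the query, a reasoning step tries to answer from that evidence, and the loop repeats until an answer emerges. The table, interface, and reasoner below are hypothetical stand-ins, not StructGPT's actual components.

```python
# Minimal sketch of an iterative reading-then-reasoning (IRR) loop
# over structured data. TABLE, read_interface, and reason are
# hypothetical stand-ins for the framework's components.
TABLE = {"France": "Paris", "Japan": "Tokyo"}


def read_interface(query: str) -> dict:
    """'Reading' step: extract the rows relevant to the query."""
    return {k: v for k, v in TABLE.items() if k.lower() in query.lower()}


def reason(query: str, evidence: dict):
    """'Reasoning' step: answer if the evidence suffices, else None."""
    return next(iter(evidence.values()), None)


def irr(query: str, max_iters: int = 3):
    for _ in range(max_iters):
        evidence = read_interface(query)
        answer = reason(query, evidence)
        if answer is not None:
            return answer
    return None


ans = irr("What is the capital of France?")
```

In the real framework the reasoning step is an LLM call that may also request more reading, which is what makes the loop iterative.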
arXiv Detail & Related papers (2023-05-16T17:45:23Z)
- Computing and Exploiting Document Structure to Improve Unsupervised Extractive Summarization of Legal Case Decisions [7.99536002595393]
We propose an unsupervised graph-based ranking model that uses a reweighting algorithm to exploit document structure.
Results on the Canadian Legal Case Law dataset show that our proposed method outperforms several strong baselines.
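A minimal sketch of the reweighting idea: sentence centrality scores from a similarity graph are scaled by the importance of the section each sentence appears in, so structurally salient sections (e.g. a conclusion) rise in the ranking. The similarity function and section weights are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of unsupervised graph-based ranking with a structural
# reweighting step. Jaccard word overlap and the section weights
# are illustrative stand-ins for the paper's components.
def ranked_sentences(sentences, sections, section_weight):
    def sim(a: str, b: str) -> float:
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / max(1, len(wa | wb))

    # Degree centrality in the similarity graph, reweighted by section.
    scores = []
    for i, s in enumerate(sentences):
        base = sum(sim(s, t) for j, t in enumerate(sentences) if j != i)
        scores.append(base * section_weight[sections[i]])
    order = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in order]


sents = ["the court ruled the appeal fails",
         "the appeal fails on both grounds",
         "counsel wore a grey suit"]
secs = ["conclusion", "conclusion", "background"]
weights = {"conclusion": 2.0, "background": 0.5}
top = ranked_sentences(sents, secs, weights)[0]
```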
arXiv Detail & Related papers (2022-11-06T22:20:42Z)
- Autoregressive Structured Prediction with Language Models [73.11519625765301]
We describe an approach to model structures as sequences of actions in an autoregressive manner with PLMs.
Our approach achieves the new state-of-the-art on all the structured prediction tasks we looked at.
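Modeling a structure as a sequence of actions can be illustrated by linearizing labeled spans into open/shift/close actions that an autoregressive model could emit token by token. The action inventory below is a made-up example for flat, non-overlapping spans, not the paper's actual scheme.

```python
# Illustrative linearization of labeled spans into an action sequence.
# Handles flat, non-overlapping spans only; the OPEN/SHIFT/CLOSE
# inventory is a hypothetical example.
def spans_to_actions(tokens, spans):
    """spans: list of (start, end_exclusive, label) tuples."""
    actions = []
    open_until = None
    for i, tok in enumerate(tokens):
        for start, end, label in spans:
            if i == start:
                actions.append(f"OPEN[{label}]")
                open_until = end
        actions.append(f"SHIFT[{tok}]")
        if open_until is not None and i == open_until - 1:
            actions.append("CLOSE")
            open_until = None
    return actions


acts = spans_to_actions(["Yang", "Zhong", "writes"], [(0, 2, "PER")])
```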
arXiv Detail & Related papers (2022-10-26T13:27:26Z)
- ArgLegalSumm: Improving Abstractive Summarization of Legal Documents with Argument Mining [0.2538209532048867]
We introduce a technique to capture the argumentative structure of legal documents by integrating argument role labeling into the summarization process.
Experiments with pretrained language models show that our proposed approach improves performance over strong baselines.
arXiv Detail & Related papers (2022-09-04T15:55:56Z)
- A Formalisation of Abstract Argumentation in Higher-Order Logic [77.34726150561087]
We present an approach for representing abstract argumentation frameworks based on an encoding into classical higher-order logic.
This provides a uniform framework for computer-assisted assessment of abstract argumentation frameworks using interactive and automated reasoning tools.
arXiv Detail & Related papers (2021-10-18T10:45:59Z)
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
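The synthetic-data construction described above can be sketched as: pair a set of reviews with a summary and prepend aspect controller tokens, so that at inference time changing the controllers steers the summary toward different aspects. The token format and aspect names are illustrative assumptions.

```python
# Sketch of building an aspect-controlled (review, summary) training
# example. The <asp:...> controller format and separator token are
# hypothetical, not the paper's exact scheme.
def make_training_example(reviews: list[str], summary: str,
                          aspects: list[str]) -> tuple[str, str]:
    """Return a (source, target) pair with aspect controllers prepended."""
    controllers = " ".join(f"<asp:{a}>" for a in aspects)
    source = controllers + " " + " </s> ".join(reviews)
    return source, summary


src, tgt = make_training_example(
    ["rooms were clean", "staff was friendly"],
    "Guests praise the clean rooms and friendly staff.",
    ["cleanliness", "staff"],
)
```

Fine-tuning on many such pairs teaches the model to condition on the controllers, which is what enables aspect-specific generation at inference time.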
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.