RTSUM: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization
- URL: http://arxiv.org/abs/2310.13895v2
- Date: Mon, 25 Mar 2024 13:41:32 GMT
- Title: RTSUM: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization
- Authors: Seonglae Cho, Yonggi Cho, HoonJae Lee, Myungha Jang, Jinyoung Yeo, Dongha Lee
- Abstract summary: We present RTSUM, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization.
We also develop a web demo for an interpretable summarizing tool, providing fine-grained interpretations with the output summary.
- Score: 12.890135367392524
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present RTSUM, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization. Given an input document, RTSUM first selects salient relation triples via multi-level salience scoring and then generates a concise summary from the selected relation triples by using a text-to-text language model. On the basis of RTSUM, we also develop a web demo for an interpretable summarizing tool, providing fine-grained interpretations with the output summary. With support for customization options, our tool visualizes the salience for textual units at three distinct levels: sentences, relation triples, and phrases. The code is publicly available.
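As a rough illustration of the pipeline above, the sketch below ranks candidate relation triples by a crude salience score and generates a summary from the top-ranked ones. Every name here (the `salience` heuristic, the `t5-base` checkpoint, the linearization format) is an illustrative assumption, not the authors' released code; RTSUM's actual multi-level scoring combines sentence-, triple-, and phrase-level signals.

```python
# Minimal sketch of triple selection + triple-to-text generation (hedged).
from transformers import pipeline

def salience(triple, document):
    # Crude stand-in: count how often the triple's arguments occur in the text.
    subj, rel, obj = triple
    return document.count(subj) + document.count(obj)

def summarize_from_triples(document, triples, top_k=5):
    # Keep the top-k salient triples as the content plan for the summary.
    selected = sorted(triples, key=lambda t: salience(t, document), reverse=True)[:top_k]
    # Linearize the triples and hand them to a text-to-text model.
    generator = pipeline("text2text-generation", model="t5-base")
    prompt = "summarize: " + " ; ".join(" | ".join(t) for t in selected)
    return generator(prompt, max_length=80)[0]["generated_text"]
```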
Related papers
- MixSumm: Topic-based Data Augmentation using LLMs for Low-resource Extractive Text Summarization [8.432813041805831]
We propose MixSumm for low-resource extractive text summarization.
Specifically, MixSumm prompts an open-source LLM, LLaMA-3-70b, to generate documents that mix information from multiple topics.
We use ROUGE scores and L-Eval, a reference-free LLaMA-3-based evaluation method, to measure the quality of generated summaries.
arXiv Detail & Related papers (2024-07-10T03:25:47Z)
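A minimal sketch of MixSumm's topic-mixing augmentation step, assuming a generic `generate` callable in place of a LLaMA-3-70b endpoint; the prompt wording is an assumption.

```python
# Hedged sketch of topic-mixing data augmentation in the spirit of MixSumm.
def mix_documents(generate, topics, n_docs=10):
    # `generate` stands in for any LLaMA-3-70b inference endpoint.
    prompt = (
        "Write a short news-style document that mixes information "
        f"from these topics: {', '.join(topics)}."
    )
    return [generate(prompt) for _ in range(n_docs)]
```

The synthetic documents would then be labeled and used to train the low-resource extractive summarizer, which is the augmentation loop MixSumm describes.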
- Source Identification in Abstractive Summarization [0.8883733362171033]
We define input sentences that contain essential information in the generated summary as "source sentences" and study how abstractive summaries are made by analyzing the source sentences.
We formulate automatic source sentence detection and compare multiple methods to establish a strong baseline for the task.
Experimental results show that the perplexity-based method performs well in highly abstractive settings, while similarity-based methods perform robustly in relatively extractive settings.
arXiv Detail & Related papers (2024-02-07T09:09:09Z)
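The two detector families compared above can be sketched as follows; both scoring functions are generic stand-ins rather than the paper's exact formulations.

```python
import numpy as np

def similarity_scores(sentence_embeddings, summary_embedding):
    # Similarity-based detection: cosine similarity between each input
    # sentence embedding and the summary embedding.
    sents = sentence_embeddings / np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)
    summ = summary_embedding / np.linalg.norm(summary_embedding)
    return sents @ summ

def perplexity_gain(ppl_without_sentence, ppl_with_sentence):
    # Perplexity-based detection: a sentence is a likely source if removing
    # it from the context makes the summary harder to predict.
    return ppl_without_sentence - ppl_with_sentence
```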
- Prompt Based Tri-Channel Graph Convolution Neural Network for Aspect Sentiment Triplet Extraction [63.0205418944714]
Aspect Sentiment Triplet Extraction (ASTE) is an emerging task to extract a given sentence's triplets, which consist of aspects, opinions, and sentiments.
Recent studies tend to address this task with a table-filling paradigm, wherein word relations are encoded in a two-dimensional table.
We propose a novel model for the ASTE task, called Prompt-based Tri-Channel Graph Convolution Neural Network (PT-GCN), which converts the relation table into a graph to explore more comprehensive relational information.
arXiv Detail & Related papers (2023-12-18T12:46:09Z)
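The table-to-graph conversion at the core of PT-GCN might look like the sketch below; the thresholding rule and the single-channel propagation step are simplifying assumptions (the paper uses three channels).

```python
import numpy as np

def table_to_graph(relation_table, threshold=0.5):
    # Cells of the word-pair relation table above the threshold become edges.
    adj = (relation_table > threshold).astype(float) + np.eye(len(relation_table))
    return adj / adj.sum(axis=1, keepdims=True)  # row-normalized adjacency

def gcn_step(adj_norm, word_features, weight):
    # One graph-convolution step: aggregate neighbor features, then project.
    return np.maximum(adj_norm @ word_features @ weight, 0.0)  # ReLU
```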
- On Context Utilization in Summarization with Large Language Models [83.84459732796302]
Large language models (LLMs) excel in abstractive summarization tasks, delivering fluent and pertinent summaries.
Recent advancements have extended their capabilities to handle long-input contexts, exceeding 100k tokens.
We conduct the first comprehensive study on context utilization and position bias in summarization.
arXiv Detail & Related papers (2023-10-16T16:45:12Z)
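One simple way to probe the position bias such a study examines is to move a known salient sentence through a long input and check whether it survives into the summary; this probe is an illustrative assumption, not the paper's protocol.

```python
def position_bias_probe(summarize, filler_sentences, key_sentence, positions):
    # For each insertion position, check whether the key fact reaches the summary.
    hits = {}
    for pos in positions:
        doc = " ".join(filler_sentences[:pos] + [key_sentence] + filler_sentences[pos:])
        hits[pos] = key_sentence.lower() in summarize(doc).lower()
    return hits
```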
- RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity, where the relation label is not seen at the training stage.
We propose to synthesize relation examples by prompting language models to generate structured texts.
arXiv Detail & Related papers (2022-03-17T05:55:14Z)
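Synthesizing triplet data for an unseen relation label can be sketched as prompting for structured text and parsing it back; the template and parser below are assumptions, not RelationPrompt's released prompts.

```python
def synthesize_examples(generate, relation_label, n=5):
    # Prompt a language model for sentences expressing an unseen relation,
    # emitted as structured text that can be parsed back into triplets.
    prompt = (
        f"Relation: {relation_label}\n"
        "Write one sentence expressing this relation, then give\n"
        "Head: <head entity>\nTail: <tail entity>"
    )
    return [parse_structured_text(generate(prompt)) for _ in range(n)]

def parse_structured_text(text):
    # Naive field parser for the structured output format assumed above.
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower()] = value.strip()
    return fields
```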
- Document-level Relation Extraction as Semantic Segmentation [38.614931876015625]
Document-level relation extraction aims to extract relations among multiple entity pairs from a document.
This paper approaches the problem by predicting an entity-level relation matrix to capture local and global information.
We propose a Document U-shaped Network for document-level relation extraction.
arXiv Detail & Related papers (2021-06-07T13:44:44Z)
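The segmentation view treats relation extraction as labeling every cell of an entity-pair matrix, much like labeling pixels; the sketch below builds such a matrix with a simple pair feature and a per-cell linear classifier in place of the paper's U-shaped network.

```python
import numpy as np

def entity_pair_matrix(entity_embeddings):
    # Cell (i, j) holds a feature vector for the pair (entity_i, entity_j);
    # the elementwise product is one simple choice of pair feature.
    return entity_embeddings[:, None, :] * entity_embeddings[None, :, :]

def classify_cells(pair_matrix, class_weights):
    # Per-cell linear classifier over relation classes: (n, n, d) @ (d, c).
    return (pair_matrix @ class_weights).argmax(axis=-1)
```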
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
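LaQSum's plug-and-play behavior reduces to a simple inference interface: condition on a user query when one is supplied, otherwise fall back to the query inferred from the document. The function names below are hypothetical stand-ins for the paper's two models.

```python
def summarize(document, infer_latent_query, conditional_generate, query=None):
    # Query-free use falls back to the query inferred from the document itself.
    if query is None:
        query = infer_latent_query(document)
    # The conditional language model generates a summary given (document, query).
    return conditional_generate(document, query)
```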
- CTRLsum: Towards Generic Controllable Text Summarization [54.69190421411766]
We present CTRLsum, a novel framework for controllable summarization.
Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system.
Using a single unified model, CTRLsum is able to achieve a broad scope of summary manipulation at inference time.
arXiv Detail & Related papers (2020-12-08T08:54:36Z)
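Keyword control of this kind is typically implemented by prepending control tokens to the source before seq2seq generation; the `kw1 | kw2 => document` layout below is an assumed format, not necessarily CTRLsum's exact one.

```python
def controlled_summary(seq2seq, document, keywords):
    # `seq2seq` is any summarizer trained on keyword-prefixed inputs;
    # the "kw1 | kw2 => document" layout is an assumed control format.
    return seq2seq(" | ".join(keywords) + " => " + document)
```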
- Fact-level Extractive Summarization with Hierarchical Graph Mask on BERT [9.271716501646194]
We propose to extract fact-level semantic units for better extractive summarization.
We incorporate our model with BERT using a hierarchical graph mask.
Experiments on the CNN/DailyMail dataset show that our model achieves state-of-the-art results.
arXiv Detail & Related papers (2020-11-19T09:29:51Z)
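The hierarchical graph mask can be pictured as an attention mask in which tokens attend only within their own fact-level unit; the block-diagonal construction below is a schematic assumption that omits the paper's fact-to-sentence edges.

```python
import numpy as np

def fact_level_mask(fact_ids):
    # fact_ids[i] is the index of the fact-level unit containing token i;
    # the resulting 0/1 mask lets tokens attend only within their own unit.
    ids = np.asarray(fact_ids)
    return (ids[:, None] == ids[None, :]).astype(float)

# Example: three tokens in fact 0, two in fact 1 -> a 5x5 block-diagonal mask.
mask = fact_level_mask([0, 0, 0, 1, 1])
```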
- Extractive Summarization as Text Matching [123.09816729675838]
This paper shifts the paradigm for how we build neural extractive summarization systems.
We formulate the extractive summarization task as a semantic text matching problem.
We have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1).
arXiv Detail & Related papers (2020-04-19T08:27:57Z)
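Matching-based extraction scores whole candidate extracts against the document in a shared embedding space and keeps the best match; the brute-force candidate enumeration below, with a generic `embed` encoder, is a sketch rather than MatchSum's trained matching model.

```python
from itertools import combinations
import numpy as np

def best_candidate(embed, sentences, k=3):
    # Score every k-sentence extract by cosine similarity to the document.
    doc_vec = embed(" ".join(sentences))
    doc_vec = doc_vec / np.linalg.norm(doc_vec)
    best, best_score = None, float("-inf")
    for combo in combinations(range(len(sentences)), k):
        cand = " ".join(sentences[i] for i in combo)
        vec = embed(cand)
        score = float(vec @ doc_vec) / float(np.linalg.norm(vec))
        if score > best_score:
            best, best_score = cand, score
    return best
```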
This list is automatically generated from the titles and abstracts of the papers on this site.