Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2306.03415v1
- Date: Tue, 6 Jun 2023 05:30:49 GMT
- Title: Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
- Authors: Peggy Tang, Junbin Gao, Lei Zhang, Zhiyong Wang
- Abstract summary: We propose an efficient and interpretable compressive summarisation method using unsupervised dual-agent reinforcement learning.
Our model achieves promising performance and a significant improvement on Newsroom in terms of the ROUGE metric.
- Score: 36.93582300019002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compressive text summarisation has recently emerged as a balance
between the conciseness issue of extractive summarisation and the factual
hallucination issue of abstractive summarisation. However, most existing
compressive summarisation methods are supervised, relying on the expensive
effort of creating a new training dataset with corresponding compressive
summaries. In this paper, we propose an efficient and interpretable
compressive summarisation method that utilises unsupervised dual-agent
reinforcement learning to optimise a summary's semantic coverage and fluency
by simulating human judgment of summarisation quality. Our model consists of
an extractor agent and a compressor agent, both with a multi-head attentional
pointer-based structure. The extractor agent first chooses salient sentences
from a document, and the compressor agent then compresses these extracted
sentences by selecting salient words to form a summary, without using
reference summaries to compute the summary reward. To the best of our
knowledge, this is the first work on unsupervised compressive summarisation.
Experimental results on three widely used datasets (Newsroom, CNN/DM, and
XSum) show that our model achieves promising performance, with a significant
improvement on Newsroom in terms of the ROUGE metric, as well as
interpretability via the semantic coverage of summarisation results.
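As a rough illustration of the reference-free reward described above, the
sketch below scores a candidate summary on semantic coverage (embedding
similarity to the document) and fluency (inverse language-model perplexity).
The specific models (a MiniLM sentence encoder, GPT-2) and the equal
weighting are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: a reference-free summary reward combining semantic
# coverage and fluency. Model choices and the 0.5/0.5 weighting are
# illustrative assumptions, not the authors' exact configuration.
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # for semantic coverage
lm = GPT2LMHeadModel.from_pretrained("gpt2")       # for fluency
lm_tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm.eval()

def coverage_score(document: str, summary: str) -> float:
    """Cosine similarity between document and summary embeddings,
    approximating how much of the document's meaning the summary keeps."""
    doc_emb, sum_emb = encoder.encode([document, summary], convert_to_tensor=True)
    return util.cos_sim(doc_emb, sum_emb).item()

def fluency_score(summary: str) -> float:
    """Inverse perplexity of the summary under a language model;
    higher values mean more fluent text."""
    ids = lm_tok(summary, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean token negative log-likelihood
    return float(torch.exp(-loss))       # 1 / perplexity, in (0, 1]

def summary_reward(document: str, summary: str, alpha: float = 0.5) -> float:
    """Reward for policy-gradient training of the extractor and
    compressor agents, computed without any reference summary."""
    return alpha * coverage_score(document, summary) + \
           (1 - alpha) * fluency_score(summary)
```

In the paper itself, the extractor and compressor agents are multi-head
attentional pointer networks trained with policy gradients against a reward
of this general shape; the reference-free reward is what makes the training
unsupervised.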
Related papers
- Towards Enhancing Coherence in Extractive Summarization: Dataset and Experiments with LLMs [70.15262704746378]
We present a systematically created, human-annotated dataset of coherent summaries for five publicly available datasets, together with natural language user feedback.
Preliminary experiments with Falcon-40B and Llama-2-13B show significant improvements (10% ROUGE-L) in producing coherent summaries.
arXiv Detail & Related papers (2024-07-05T20:25:04Z)
- Enhancing Coherence of Extractive Summarization with Multitask Learning [40.349019691412465]
This study proposes a multitask learning architecture for extractive summarization with coherence boosting.
The architecture contains an extractive summarizer and a coherence discriminator module.
Experiments show that our proposed method significantly improves the proportion of consecutive sentences in the extracted summaries.
arXiv Detail & Related papers (2023-05-22T09:20:58Z)
- Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization [44.835811239393244]
Sentence summarization shortens a given text while preserving its core content.
Unsupervised approaches have been studied to summarize texts without human-written summaries.
We devise an abstractive model based on reinforcement learning without ground-truth summaries.
arXiv Detail & Related papers (2022-12-21T08:34:28Z)
- MACSum: Controllable Summarization with Mixed Attributes [56.685735509260276]
MACSum is the first human-annotated summarization dataset for controlling mixed attributes.
We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization.
arXiv Detail & Related papers (2022-11-09T17:17:37Z)
- Salience Allocation as Guidance for Abstractive Summarization [61.31826412150143]
We propose a novel summarization approach with flexible and reliable salience guidance, namely SEASON (SaliencE Allocation as Guidance for Abstractive SummarizatiON).
SEASON utilizes the allocation of salience expectation to guide abstractive summarization and adapts well to articles with different levels of abstractiveness.
arXiv Detail & Related papers (2022-10-22T02:13:44Z)
- COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization [84.70895015194188]
We propose a contrastive learning based re-ranking framework for one-stage summarization, called COLO; a sketch of the ranking objective behind such re-rankers appears after this list.
COLO boosts the extractive and abstractive results of one-stage systems on the CNN/DailyMail benchmark to 44.58 and 46.33 ROUGE-1.
arXiv Detail & Related papers (2022-09-29T06:11:21Z)
- Improving Multi-Document Summarization through Referenced Flexible Extraction with Credit-Awareness [21.037841262371355]
A notable challenge in Multi-Document Summarization (MDS) is the extreme length of the input.
We present an extract-then-abstract Transformer framework to overcome the problem.
We propose a loss weighting mechanism that makes the model aware of the unequal importance of sentences not in the pseudo extraction oracle.
arXiv Detail & Related papers (2022-05-04T04:40:39Z)
- EASE: Extractive-Abstractive Summarization with Explanations [18.046254486733186]
We present an explainable summarization system based on the Information Bottleneck principle.
Inspired by previous research showing that humans use a two-stage framework to summarize long documents, our framework first extracts a pre-defined number of evidence spans as explanations.
We show that explanations from our framework are more relevant than simple baselines, without substantially sacrificing the quality of the generated summary.
arXiv Detail & Related papers (2021-05-14T17:45:06Z)
- The Summary Loop: Learning to Write Abstractive Summaries Without Examples [21.85348918324668]
This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint.
Key terms are masked out of the original document and must be filled in by a coverage model given the current generated summary; a sketch of this coverage test appears after this list.
When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 ROUGE-1 points.
arXiv Detail & Related papers (2021-05-11T23:19:46Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverage knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
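As referenced in The Summary Loop entry above, coverage can be turned into a
fill-in-the-blank test: mask key terms in the document and ask a model to
recover them given the candidate summary. A minimal sketch under stated
assumptions (a generic masked language model and a caller-supplied keyword
list stand in for the paper's trained coverage model and key-term selection):

```python
# Hedged sketch of a Summary-Loop-style coverage test: key terms are
# masked in the source document, and a masked language model must
# recover them with the candidate summary prepended as context.
# The MLM ("distilroberta-base") and the caller-supplied keyword list
# are illustrative assumptions, not the paper's trained coverage model.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilroberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
mlm.eval()

def coverage(summary: str, document: str, keywords: list[str]) -> float:
    """Fraction of masked key terms the MLM recovers given the summary."""
    # Mask the first occurrence of each keyword, processed in document
    # order so mask positions line up with the gold terms.
    found = sorted((document.find(k), k) for k in keywords if k in document)
    gold = [k for _, k in found]
    masked_doc = document
    for kw in gold:
        masked_doc = masked_doc.replace(kw, tok.mask_token, 1)
    enc = tok(summary, masked_doc, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = mlm(**enc).logits
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().squeeze(-1)
    preds = logits[0, mask_pos].argmax(dim=-1)
    # Single-subword comparison is a rough proxy for exact recovery.
    pred_words = [tok.convert_ids_to_tokens(int(i)).lstrip("Ġ") for i in preds]
    hits = sum(p.lower() == g.lower() for p, g in zip(pred_words, gold))
    return hits / max(len(gold), 1)
```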
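The COLO entry above describes contrastively training a scorer to re-rank
candidate summaries. A minimal sketch of the pairwise ranking objective
commonly used for such re-rankers (the margin schedule and normalisation are
assumptions, not COLO's exact loss):

```python
# Hedged sketch of a COLO-style contrastive re-ranking objective:
# candidates sorted from best to worst by some quality signal should
# receive monotonically decreasing scores. The margin schedule and
# normalisation are illustrative assumptions.
import torch
import torch.nn.functional as F

def ranking_loss(scores: torch.Tensor, margin: float = 0.01) -> torch.Tensor:
    """scores: (n,) model scores for n candidate summaries of one
    document, pre-sorted from best to worst candidate. Each pair scored
    out of order is penalised, with a margin growing in rank distance."""
    n = scores.size(0)
    if n < 2:
        return scores.new_zeros(())
    terms = [
        F.relu(scores[j] - scores[i] + margin * (j - i))
        for i in range(n) for j in range(i + 1, n)
    ]
    return torch.stack(terms).mean()
```

At inference, re-ranking is simply an argmax over the candidate scores.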