Attend to the Right Context: A Plug-and-Play Module for
Content-Controllable Summarization
- URL: http://arxiv.org/abs/2212.10819v1
- Date: Wed, 21 Dec 2022 07:17:32 GMT
- Title: Attend to the Right Context: A Plug-and-Play Module for
Content-Controllable Summarization
- Authors: Wen Xiao, Lesly Miculicich, Yang Liu, Pengcheng He, Giuseppe Carenini
- Abstract summary: We propose a plug-and-play module RelAttn to adapt any general summarizers to the content-controllable summarization task.
Experiments show that our method effectively improves all the summarizers, and outperforms the prefix-based method and a widely used plug-and-play model.
- Score: 38.894418920684366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Content-Controllable Summarization generates summaries focused on the given
controlling signals. Due to the lack of large-scale training corpora for the
task, we propose a plug-and-play module RelAttn to adapt any general
summarizers to the content-controllable summarization task. RelAttn first
identifies the relevant content in the source documents, and then makes the
model attend to the right context by directly steering the attention weight. We
further apply an unsupervised online adaptive parameter searching algorithm to
determine the degree of control in the zero-shot setting, while such parameters
are learned in the few-shot setting. By applying the module to three backbone
summarization models, experiments show that our method effectively improves all
the summarizers, and outperforms the prefix-based method and a widely used
plug-and-play model in both zero- and few-shot settings. Tellingly, more
benefit is observed in scenarios where more control is needed.
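As a rough illustration of the attention-steering step described in the abstract, the sketch below blends a summarizer's attention distribution over source tokens with a relevance distribution derived from the controlling signal. The blending weight lam (the "degree of control"), the relevance input, and all names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def steer_attention(attn, relevance, lam=0.3):
    """Blend an attention distribution with a relevance distribution.

    attn:      tensor of shape (src_len,), softmax-normalized attention weights
    relevance: tensor of shape (src_len,), non-negative relevance scores for the
               source tokens (hypothetically, similarity to the controlling signal)
    lam:       degree of control in [0, 1]; 0 keeps the original attention,
               1 attends only according to relevance. This is the kind of knob
               the abstract's parameter search would set in the zero-shot setting,
               or that would be learned in the few-shot setting.
    """
    rel_dist = relevance / relevance.sum().clamp_min(1e-8)   # normalize scores into a distribution
    steered = (1.0 - lam) * attn + lam * rel_dist            # convex combination of the two distributions
    return steered / steered.sum(dim=-1, keepdim=True)       # renormalize so weights still sum to 1

# Toy usage: five source tokens, where tokens 2 and 3 match the controlling signal.
attn = F.softmax(torch.randn(5), dim=-1)
relevance = torch.tensor([0.0, 0.1, 0.9, 0.8, 0.0])
print(steer_attention(attn, relevance, lam=0.5))

Because the combination is convex, the result is still a valid attention distribution, and lam = 0 recovers the unmodified summarizer.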
Related papers
- Readout Guidance: Learning Control from Diffusion Features [96.22155562120231]
We present Readout Guidance, a method for controlling text-to-image diffusion models with learned signals.
Readout Guidance uses readout heads, lightweight networks trained to extract signals from the features of a pre-trained, frozen diffusion model at every timestep.
These readouts can encode single-image properties, such as pose, depth, and edges; or higher-order properties that relate multiple images, such as correspondence and appearance similarity.
arXiv Detail & Related papers (2023-12-04T18:59:32Z)
- MACSum: Controllable Summarization with Mixed Attributes [56.685735509260276]
MACSum is the first human-annotated summarization dataset for controlling mixed attributes.
We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization.
arXiv Detail & Related papers (2022-11-09T17:17:37Z)
- Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning [85.55727213502402]
We focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks.
We propose Sample-specific Ensemble of Source Models (SESoM).
SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs.
arXiv Detail & Related papers (2022-10-23T01:33:16Z)
- Improving the Faithfulness of Abstractive Summarization via Entity Coverage Control [27.214742188672464]
We propose a method to remedy entity-level hallucinations with Entity Coverage Control (ECC).
ECC computes entity coverage precision and prepends the corresponding control code to each training example.
We show that the proposed method leads to more faithful and salient abstractive summarization in supervised fine-tuning and zero-shot settings.
arXiv Detail & Related papers (2022-07-05T18:52:19Z)
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
- CTRLsum: Towards Generic Controllable Text Summarization [54.69190421411766]
We present CTRLsum, a novel framework for controllable summarization.
Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system.
Using a single unified model, CTRLsum is able to achieve a broad scope of summary manipulation at inference time.
arXiv Detail & Related papers (2020-12-08T08:54:36Z)
- Self-Supervised and Controlled Multi-Document Opinion Summarization [16.674646504295687]
We propose a self-supervised setup that considers an individual document as a target summary for a set of similar documents.
We address the problem of hallucinations through the use of control codes.
Our benchmarks on two datasets against graph-based and recent neural abstractive unsupervised models show that our proposed method generates summaries with superior quality and relevance.
arXiv Detail & Related papers (2020-04-30T13:20:18Z)