MACSum: Controllable Summarization with Mixed Attributes
- URL: http://arxiv.org/abs/2211.05041v2
- Date: Wed, 7 Jun 2023 02:02:51 GMT
- Title: MACSum: Controllable Summarization with Mixed Attributes
- Authors: Yusen Zhang, Yang Liu, Ziyi Yang, Yuwei Fang, Yulong Chen, Dragomir
Radev, Chenguang Zhu, Michael Zeng, Rui Zhang
- Abstract summary: MACSum is the first human-annotated summarization dataset for controlling mixed attributes.
We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization.
- Score: 56.685735509260276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controllable summarization allows users to generate customized summaries with
specified attributes. However, due to the lack of designated annotations of
controlled summaries, existing works have to craft pseudo datasets by adapting
generic summarization benchmarks. Furthermore, most research focuses on
controlling single attributes individually (e.g., a short summary or a highly
abstractive summary) rather than controlling a mix of attributes together
(e.g., a short and highly abstractive summary). In this paper, we propose
MACSum, the first human-annotated summarization dataset for controlling mixed
attributes. It contains source texts from two domains, news articles and
dialogues, with human-annotated summaries controlled by five designed
attributes (Length, Extractiveness, Specificity, Topic, and Speaker). We
propose two simple and effective parameter-efficient approaches for the new
task of mixed controllable summarization based on hard prompt tuning and soft
prefix tuning. Results and analysis demonstrate that hard prompt models yield
the best performance on all metrics and human evaluations. However,
mixed-attribute control is still challenging for summarization tasks. Our
dataset and code are available at https://github.com/psunlpgroup/MACSum.
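The hard-prompt approach described above can be illustrated with a minimal sketch: each attribute value is rendered as natural-language control text and prepended to the source before it is fed to a summarization model. The template, function name, and separator below are illustrative assumptions for this sketch, not the exact format used by the MACSum authors.

```python
# Minimal sketch of hard-prompt control for mixed-attribute summarization.
# The five attribute names follow the MACSum dataset (Length, Extractiveness,
# Specificity, Topic, Speaker); the prompt template itself is hypothetical.

def build_control_prompt(source: str,
                         length: str = "normal",
                         extractiveness: str = "normal",
                         specificity: str = "normal",
                         topic: str = "",
                         speaker: str = "") -> str:
    """Prepend textual control codes to the source document.

    Topic and Speaker are optional free-text attributes and are only
    included when the user specifies them.
    """
    parts = [
        f"length: {length}",
        f"extractiveness: {extractiveness}",
        f"specificity: {specificity}",
    ]
    if topic:
        parts.append(f"topic: {topic}")
    if speaker:
        parts.append(f"speaker: {speaker}")
    # The resulting string would be tokenized and fed to a seq2seq model.
    return " | ".join(parts) + " | summarize: " + source


prompt = build_control_prompt(
    "The meeting covered the third-quarter budget review.",
    length="short", extractiveness="high",
)
```

In the paper's setting this prompt would be consumed by a pretrained seq2seq summarizer; the soft prefix-tuning alternative instead learns continuous prefix vectors rather than textual control codes.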
Related papers
- AugSumm: towards generalizable speech summarization using synthetic
labels from large language model [61.73741195292997]
Abstractive speech summarization (SSUM) aims to generate human-like summaries from speech.
Conventional SSUM models are mostly trained and evaluated with a single ground-truth (GT) human-annotated deterministic summary.
We propose AugSumm, a method to leverage large language models (LLMs) as a proxy for human annotators to generate augmented summaries.
arXiv Detail & Related papers (2024-01-10T18:39:46Z)
- PromptSum: Parameter-Efficient Controllable Abstractive Summarization [4.145362426026615]
We introduce PromptSum, a method combining PT with a multi-task objective and discrete entity prompts for abstractive summarization.
Our model achieves competitive ROUGE results on popular abstractive summarization benchmarks, coupled with a strong level of controllability through entities.
arXiv Detail & Related papers (2023-08-06T13:54:14Z)
- Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation [72.92329724600631]
We propose a pseudo-attribute-based algorithm, coined Spread Spurious Attribute, for improving the worst-group accuracy.
Our experiments on various benchmark datasets show that our algorithm consistently outperforms the baseline methods.
We also demonstrate that the proposed SSA can achieve comparable performances to methods using full (100%) spurious attribute supervision.
arXiv Detail & Related papers (2022-04-05T09:08:30Z)
- Controllable Summarization with Constrained Markov Decision Process [50.04321779376415]
We study controllable text summarization which allows users to gain control on a particular attribute.
We propose a novel training framework based on Constrained Markov Decision Process (CMDP)
Our framework can be applied to control important attributes of summarization, including length, covered entities, and abstractiveness.
arXiv Detail & Related papers (2021-08-07T09:12:53Z)
- CTRLsum: Towards Generic Controllable Text Summarization [54.69190421411766]
We present CTRLsum, a novel framework for controllable summarization.
Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system.
Using a single unified model, CTRLsum is able to achieve a broad scope of summary manipulation at inference time.
arXiv Detail & Related papers (2020-12-08T08:54:36Z)
- An Enhanced MeanSum Method For Generating Hotel Multi-Review Summarizations [0.06091702876917279]
This work uses a Multi-Aspect Masker (MAM) as a content selector to address the multi-aspect issue.
We also propose a regularizer to control the length of the generated summaries.
Our improved model achieves higher ROUGE and Sentiment Accuracy scores than the original MeanSum method.
arXiv Detail & Related papers (2020-12-07T13:16:01Z)
- Exploring Explainable Selection to Control Abstractive Summarization [51.74889133688111]
We develop a novel framework that focuses on explainability.
A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores.
A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content.
arXiv Detail & Related papers (2020-04-24T14:39:34Z)
- Interpretable Multi-Headed Attention for Abstractive Summarization at Controllable Lengths [14.762731718325002]
Multi-level Summarizer (MLS) is a supervised method to construct abstractive summaries of a text document at controllable lengths.
MLS outperforms strong baselines by up to 14.70% in METEOR score.
arXiv Detail & Related papers (2020-02-18T19:40:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed summaries (including all information) and is not responsible for any consequences arising from their use.