Improving Zero and Few-Shot Abstractive Summarization with Intermediate
Fine-tuning and Data Augmentation
- URL: http://arxiv.org/abs/2010.12836v2
- Date: Sun, 11 Apr 2021 13:04:46 GMT
- Title: Improving Zero and Few-Shot Abstractive Summarization with Intermediate
Fine-tuning and Data Augmentation
- Authors: Alexander R. Fabbri, Simeng Han, Haoyuan Li, Haoran Li, Marjan
Ghazvininejad, Shafiq Joty, Dragomir Radev, Yashar Mehdad
- Abstract summary: Models pretrained with self-supervised objectives on large text corpora achieve state-of-the-art performance on English text summarization tasks.
Models are typically fine-tuned on hundreds of thousands of data points, an infeasible requirement when applying summarization to new, niche domains.
We introduce a novel and generalizable method, called WikiTransfer, for fine-tuning pretrained models for summarization in an unsupervised, dataset-specific manner.
- Score: 101.26235068460551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Models pretrained with self-supervised objectives on large text corpora
achieve state-of-the-art performance on English text summarization tasks.
However, these models are typically fine-tuned on hundreds of thousands of data
points, an infeasible requirement when applying summarization to new, niche
domains. In this work, we introduce a novel and generalizable method, called
WikiTransfer, for fine-tuning pretrained models for summarization in an
unsupervised, dataset-specific manner. WikiTransfer fine-tunes pretrained
models on pseudo-summaries, produced from generic Wikipedia data, which contain
characteristics of the target dataset, such as the length and level of
abstraction of the desired summaries. WikiTransfer models achieve
state-of-the-art, zero-shot abstractive summarization performance on the
CNN-DailyMail dataset and demonstrate the effectiveness of our approach on
three additional diverse datasets. These models are more robust to noisy data
and also achieve better or comparable few-shot performance using 10 and 100
training examples when compared to few-shot transfer from other summarization
datasets. To further boost performance, we employ data augmentation via
round-trip translation as well as introduce a regularization term for improved
few-shot transfer. To understand the role of dataset aspects in transfer
performance and the quality of the resulting output summaries, we further study
the effect of the components of our unsupervised fine-tuning data and analyze
few-shot performance using both automatic and human evaluation.
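The abstract names two concrete mechanisms, and both fit in a short sketch. Below is a minimal, hypothetical Python illustration: pseudo-summary construction from a Wikipedia article (first k sentences as the summary, the remainder as the source, with an overlap filter as a crude stand-in for the paper's abstraction control) and round-trip translation through German for augmentation. The sentence splitter, the overlap threshold, and the MarianMT pivot models are our assumptions, not the authors' released pipeline.
```python
import re
from transformers import MarianMTModel, MarianTokenizer

def split_sentences(text):
    # crude sentence splitter; a real pipeline would use nltk or spacy
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def wiki_pseudo_example(article, k=3, max_overlap=0.6):
    """Use the first k sentences of a Wikipedia article as the pseudo-summary
    and the rest as the source document. k is tuned to match the average
    summary length of the target dataset; dropping source sentences that
    overlap heavily with the pseudo-summary (a crude proxy for the paper's
    abstraction control) makes the training task more abstractive."""
    sents = split_sentences(article)
    summary, source = sents[:k], sents[k:]
    summ_words = set(" ".join(summary).lower().split())
    def overlap(s):
        words = s.lower().split()
        return sum(w in summ_words for w in words) / max(len(words), 1)
    source = [s for s in source if overlap(s) <= max_overlap]
    return {"source": " ".join(source), "summary": " ".join(summary)}

# Round-trip translation (en -> de -> en) to paraphrase few-shot examples.
en_de_tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
en_de = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
de_en_tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-de-en")
de_en = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-de-en")

def round_trip(texts):
    batch = en_de_tok(texts, return_tensors="pt", padding=True, truncation=True)
    german = en_de_tok.batch_decode(en_de.generate(**batch), skip_special_tokens=True)
    batch = de_en_tok(german, return_tensors="pt", padding=True, truncation=True)
    return de_en_tok.batch_decode(de_en.generate(**batch), skip_special_tokens=True)
```
Fine-tuning a pretrained summarizer (e.g., BART) on such pseudo pairs is the transfer step; round_trip produces paraphrased copies of the 10 or 100 target-domain examples for the few-shot setting.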
Related papers
- A CLIP-Powered Framework for Robust and Generalizable Data Selection [51.46695086779598]
Real-world datasets often contain redundant and noisy data, imposing a negative impact on training efficiency and model performance.
Data selection has shown promise in identifying the most representative samples from the entire dataset.
We propose a novel CLIP-powered data selection framework that leverages multimodal information for more robust and generalizable sample selection.
arXiv Detail & Related papers (2024-10-15T03:00:58Z)
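A rough sketch of how CLIP similarity can drive sample selection (our illustration under assumed details; the paper's actual framework is more involved):
```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def alignment_scores(images, captions):
    """Score each (image, caption) pair by CLIP image-text similarity;
    low scores flag noisy or mislabeled samples for removal."""
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.logits_per_image.diag()  # diagonal = matched pairs

def select_top_k(images, captions, k):
    # keep the k best-aligned samples as the selected subset
    return torch.topk(alignment_scores(images, captions), k).indices
```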
- Not All Data Matters: An End-to-End Adaptive Dataset Pruning Framework for Enhancing Model Performance and Efficiency [9.460023981858319]
We propose an end-to-end Adaptive DAtaset PRUNing framework called AdaPruner.
AdaPruner iteratively prunes redundant samples to an expected pruning ratio.
It can still significantly enhance model performance even after pruning up to 10-30% of the training data.
arXiv Detail & Related papers (2023-12-09T16:01:21Z)
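The entry does not specify AdaPruner's scoring criterion, but the iterative pattern it describes looks roughly like this generic sketch, with score_fn as a placeholder for the per-sample utility measure (e.g., loss under the current model):
```python
import numpy as np

def iterative_prune(dataset, score_fn, target_ratio=0.2, steps=5):
    """Iteratively drop the lowest-utility samples until target_ratio of
    the data has been removed (a generic sketch, not AdaPruner itself)."""
    keep = np.arange(len(dataset))
    n_final = int(len(dataset) * (1 - target_ratio))
    per_step = max((len(dataset) - n_final) // steps, 1)
    for _ in range(steps):
        n_drop = min(per_step, len(keep) - n_final)
        if n_drop <= 0:
            break
        scores = np.array([score_fn(dataset[i]) for i in keep])
        order = np.argsort(scores)   # ascending: lowest utility first
        keep = keep[order[n_drop:]]  # prune the bottom n_drop samples
    return keep  # indices of the retained samples
```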
- ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback [21.168991554983815]
We propose a progressive zero-shot dataset generation framework, ProGen, to guide the generation of new training data.
We show ProGen achieves on-par or superior performance with only 1% synthetic dataset size.
arXiv Detail & Related papers (2022-10-22T02:07:10Z)
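A heavily simplified skeleton of the progressive loop the title describes, with generate_fn (the zero-shot generator) and quality_fn (the feedback signal) left as hypothetical callables:
```python
def progen_loop(generate_fn, quality_fn, rounds=3, per_round=100, k_feedback=8):
    """Each round: synthesize examples conditioned on the best examples so
    far, then keep the highest-quality ones as in-context feedback for the
    next round (a sketch of the progressive idea, not ProGen's actual code)."""
    dataset, feedback = [], []
    for _ in range(rounds):
        batch = [generate_fn(in_context=feedback) for _ in range(per_round)]
        dataset.extend(batch)
        scored = sorted(dataset, key=quality_fn, reverse=True)
        feedback = scored[:k_feedback]
    return dataset
```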
- Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning -- simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z)
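Mechanically, co-finetuning can be as simple as interleaving batches from all tasks in one training stream; a sketch of that idea (our illustration, not the paper's sampling scheme):
```python
import random

def cofinetune_stream(loaders, weights, num_batches):
    """Yield (task_id, batch) pairs sampled from several upstream and
    downstream dataset loaders in proportion to `weights`, so a single
    model sees all tasks simultaneously rather than sequentially."""
    iters = [iter(dl) for dl in loaders]
    for _ in range(num_batches):
        t = random.choices(range(len(loaders)), weights=weights)[0]
        try:
            batch = next(iters[t])
        except StopIteration:            # restart an exhausted loader
            iters[t] = iter(loaders[t])
            batch = next(iters[t])
        yield t, batch
```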
- Efficient Few-Shot Fine-Tuning for Opinion Summarization [83.76460801568092]
Abstractive summarization models are typically pre-trained on large amounts of generic texts, then fine-tuned on tens or hundreds of thousands of annotated samples.
We show that a few-shot method based on adapters can easily store in-domain knowledge.
We show that this self-supervised adapter pre-training improves summary quality over standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp datasets.
arXiv Detail & Related papers (2022-05-04T16:38:37Z)
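A bottleneck adapter in the common Houlsby-style design is only a few lines of PyTorch; freezing the summarizer and training only such modules is the general shape of the approach (the paper's exact adapter architecture may differ):
```python
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck MLP inserted after a frozen transformer
    sub-layer; only these few parameters are updated, which is why
    tens of examples can suffice for few-shot fine-tuning."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual connection
```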
- MSeg: A Composite Dataset for Multi-domain Semantic Segmentation [100.17755160696939]
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains.
We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images.
A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training.
arXiv Detail & Related papers (2021-12-27T16:16:35Z)
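The unification step amounts to remapping each source dataset's label IDs into one shared taxonomy, roughly as below (hypothetical label names and IDs; the real MSeg taxonomy is far larger):
```python
import numpy as np

# Unified taxonomy (hypothetical subset) and one dataset's local labels.
UNIFIED = {"road": 0, "person": 1, "car": 2, "ignore": 255}
CITYSCAPES_TO_UNIFIED = {0: UNIFIED["road"], 11: UNIFIED["person"],
                         13: UNIFIED["car"]}  # local id -> unified id

def remap_mask(mask, table, ignore=UNIFIED["ignore"]):
    """Relabel a segmentation mask into the unified taxonomy; unmapped
    classes become `ignore` so they don't contribute to the loss."""
    out = np.full_like(mask, ignore)
    for local_id, unified_id in table.items():
        out[mask == local_id] = unified_id
    return out
```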
- CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems [121.78477833009671]
We investigate the performance of different summarization models under a cross-dataset setting.
A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effect of model architectures and generation approaches.
arXiv Detail & Related papers (2020-10-11T02:19:15Z)
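Cross-dataset evaluation reduces to a train-on-one, test-on-all grid; a sketch of that protocol using the rouge_score package and a placeholder system interface:
```python
from rouge_score import rouge_scorer

def cross_dataset_grid(systems, test_sets):
    """ROUGE-L of every system (each trained on its own source dataset)
    on every test set; off-diagonal cells measure cross-dataset
    generalization (a sketch of the protocol, not the paper's code)."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    grid = {}
    for sys_name, summarize in systems.items():      # summarize: doc -> str
        for ds_name, examples in test_sets.items():  # (document, reference)
            scores = [scorer.score(ref, summarize(doc))["rougeL"].fmeasure
                      for doc, ref in examples]
            grid[(sys_name, ds_name)] = sum(scores) / len(scores)
    return grid
```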
- Learning to summarize from human feedback [18.964548137315333]
We show that it is possible to significantly improve summary quality by training a model to optimize for human preferences.
We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone.
Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning.
arXiv Detail & Related papers (2020-09-02T19:54:41Z)
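The core of this line of work is a reward model trained on pairwise human preferences; the standard objective is a logistic comparison of reward scores (a sketch of the loss only, not the full RL pipeline):
```python
import torch.nn.functional as F

def preference_loss(r_preferred, r_other):
    """Pairwise reward-model objective: maximize the log-probability that
    the human-preferred summary receives the higher reward score. The
    trained reward model then drives RL fine-tuning of the summarizer."""
    return -F.logsigmoid(r_preferred - r_other).mean()
```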
- Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis [1.148539813252112]
We explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods.
We show that tuning a state-of-the-art model trained on newspaper data can boost performance on student reflection data.
We propose a template-based model to synthesize new data, which, when incorporated into training, further increases ROUGE scores.
arXiv Detail & Related papers (2020-02-09T17:49:08Z)
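Template-based synthesis of this kind typically abstracts existing summaries into slotted templates and refills the slots with domain keywords; a toy sketch with hypothetical templates and slot values:
```python
import itertools

TEMPLATES = ["Students found {topic} confusing and requested more {resource}.",
             "The most useful part of the lecture was {topic}."]
SLOTS = {"topic": ["recursion", "pointers"],
         "resource": ["examples", "practice problems"]}

def synthesize(templates, slots):
    """Fill every template with every combination of slot values to
    produce new pseudo-summaries for training."""
    names = sorted(slots)
    for tmpl in templates:
        used = [n for n in names if "{" + n + "}" in tmpl]
        for combo in itertools.product(*(slots[n] for n in used)):
            yield tmpl.format(**dict(zip(used, combo)))
```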