Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis
- URL: http://arxiv.org/abs/2002.03407v1
- Date: Sun, 9 Feb 2020 17:49:08 GMT
- Title: Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis
- Authors: Ahmed Magooda, Diane Litman
- Abstract summary: We explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods.
We show that tuning a state-of-the-art model trained on newspaper data can boost performance on student reflection data.
We propose a template-based model to synthesize new data, which when incorporated into training further increased ROUGE scores.
- Score: 1.148539813252112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training abstractive summarization models typically requires large amounts of
data, which can be a limitation for many domains. In this paper we explore
using domain transfer and data synthesis to improve the performance of recent
abstractive summarization methods when applied to small corpora of student
reflections. First, we explored whether tuning a state-of-the-art model trained
on newspaper data could boost performance on student reflection data.
Evaluations demonstrated that summaries produced by the tuned model achieved
higher ROUGE scores compared to a model trained on just student reflection data
or just newspaper data. The tuned model also achieved higher scores compared to
extractive summarization baselines, and additionally was judged to produce more
coherent and readable summaries in human evaluations. Second, we explored
whether synthesizing summaries of student data could additionally boost
performance. We proposed a template-based model to synthesize new data, which
when incorporated into training further increased ROUGE scores. Finally, we
showed that combining data synthesis with domain transfer achieved higher ROUGE
scores compared to only using one of the two approaches.
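The listing carries no code, so the two sketches below are illustrative reconstructions rather than the authors' implementation. First, domain transfer: fine-tune a summarizer already trained on newspaper data on a small in-domain corpus. The checkpoint, data fields, and hyperparameters are assumptions; the abstract only says "state of the art model trained on newspaper data".

```python
# Minimal domain-transfer sketch: fine-tune a newspaper-trained summarizer on
# a small in-domain corpus. Checkpoint, data, and hyperparameters below are
# illustrative assumptions, not the authors' actual setup.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "facebook/bart-large-cnn"  # stand-in newspaper-trained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Hypothetical small corpus of (reflection, summary) pairs.
pairs = [{"text": "Many students wrote that recursion was confusing ...",
          "summary": "Recursion was the main point of confusion."}]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(text_target=batch["summary"],
                              truncation=True, max_length=64)["input_ids"]
    return enc

train_ds = Dataset.from_list(pairs).map(tokenize,
                                        remove_columns=["text", "summary"])
trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="tuned-reflection",
                                  num_train_epochs=3,
                                  per_device_train_batch_size=2),
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()  # the tuned model is then scored with ROUGE against references
```

Second, the template-based synthesis step. The abstract does not spell out the templating procedure, so the keyword-slot scheme below is a hypothetical rendering of the general idea: mask domain keywords in an existing human summary, then refill the slots with other in-domain terms to create new training pairs.

```python
# Hypothetical template-based synthesis: mask domain keywords in a human
# summary, then refill the slots with other in-domain terms.
import itertools

def make_template(summary, keywords):
    for kw in keywords:
        summary = summary.replace(kw, "{slot}")
    return summary

def fill_template(template, vocabulary):
    n_slots = template.count("{slot}")
    synthetic = []
    for combo in itertools.permutations(vocabulary, n_slots):
        s = template
        for word in combo:
            s = s.replace("{slot}", word, 1)
        synthetic.append(s)
    return synthetic

tpl = make_template("Students struggled with recursion.", ["recursion"])
print(fill_template(tpl, ["pointers", "unit testing"]))
# ['Students struggled with pointers.', 'Students struggled with unit testing.']
```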
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models [69.76066070227452]
*Data Synthesis* is a promising way to train a small model with very little labeled data.
We propose *Synthesis Step by Step* (**S3**), a data synthesis framework that shrinks this distribution gap.
Our approach improves the performance of a small model by reducing the gap between the synthetic dataset and the real data.
arXiv Detail & Related papers (2023-10-20T17:14:25Z)
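The S3 blurb above only names the idea; as a rough illustration, the loop below alternates synthesis and training, extrapolating from the small model's errors on a little real data. Every callable here (llm_synthesize, train_small_model, is_correct) is a hypothetical placeholder, not the paper's code.

```python
# Hypothetical sketch of an error-extrapolating synthesis loop in the spirit
# of S3; llm_synthesize, train_small_model, and is_correct are placeholders.
def synthesize_step_by_step(llm_synthesize, train_small_model, is_correct,
                            task_description, real_dev_set, rounds=3):
    data = llm_synthesize(task_description, error_examples=[])
    model = train_small_model(data)
    for _ in range(rounds):
        # Collect real examples the small model still gets wrong ...
        errors = [ex for ex in real_dev_set if not is_correct(model, ex)]
        if not errors:
            break
        # ... and ask the LLM for new samples resembling those failures,
        # shrinking the gap between synthetic and real distributions.
        data += llm_synthesize(task_description, error_examples=errors)
        model = train_small_model(data)
    return model
```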
- Feedback-guided Data Synthesis for Imbalanced Classification [10.836265321046561]
We introduce a framework for augmenting static datasets with useful synthetic samples.
We find that the samples must be close to the support of the real data of the task at hand, and be sufficiently diverse.
On ImageNet-LT, we achieve state-of-the-art results, with over 4 percent improvement on underrepresented classes.
arXiv Detail & Related papers (2023-09-29T21:47:57Z)
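Reading the entry's two requirements (closeness to the real data's support, sufficient diversity) as a filter over candidate synthetic samples gives something like the embedding-space check below; the thresholds and the exact criterion are assumptions, not the paper's actual feedback mechanism.

```python
# Hypothetical embedding-space filter for synthetic samples: keep candidates
# that sit near the real data's support yet are not redundant with samples
# already kept. Thresholds are illustrative placeholders.
import numpy as np

def select_synthetic(candidates, real, max_dist=0.5, min_div=0.2):
    """candidates, real: arrays of shape (n, d) holding sample embeddings."""
    kept = []
    for cand in candidates:
        near_real = np.linalg.norm(real - cand, axis=1).min() <= max_dist
        diverse = (not kept or
                   np.linalg.norm(np.array(kept) - cand, axis=1).min() >= min_div)
        if near_real and diverse:
            kept.append(cand)
    return np.array(kept)
```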
- Inverse Reinforcement Learning for Text Summarization [52.765898203824975]
We introduce inverse reinforcement learning (IRL) as an effective paradigm for training abstractive summarization models.
Experimental results across datasets in different domains demonstrate the superiority of our proposed IRL model for summarization over MLE and RL baselines.
arXiv Detail & Related papers (2022-12-19T23:45:05Z)
- MeetSum: Transforming Meeting Transcript Summarization using Transformers! [2.1915057426589746]
We utilize a Transformer-based Pointer Generator Network to generate abstract summaries for meeting transcripts.
This model uses two LSTMs as an encoder and a decoder, a Pointer network that copies words from the input text, and a Generator network to produce out-of-vocabulary words.
We show that training the model on a news summary dataset and testing it zero-shot on the meeting dataset produces better results than training it on the AMI meeting dataset.
arXiv Detail & Related papers (2021-08-13T16:34:09Z)
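The MeetSum entry above describes the standard pointer-generator arrangement; the decoding step below shows the usual way its copy and generate distributions are mixed (an illustrative rendering with made-up shapes, not the paper's code).

```python
# Illustrative pointer-generator mixing step: blend the decoder's vocabulary
# distribution with a copy distribution induced by attention over the source.
import torch

def pointer_generator_step(vocab_logits, attn_weights, src_token_ids, p_gen):
    """vocab_logits: (B, V); attn_weights: (B, S), rows sum to 1;
    src_token_ids: (B, S) source ids; p_gen: (B, 1) generate-vs-copy gate."""
    vocab_dist = torch.softmax(vocab_logits, dim=-1)        # "generator"
    copy_dist = torch.zeros_like(vocab_dist)                # "pointer"
    copy_dist.scatter_add_(1, src_token_ids, attn_weights)  # route attention mass to source ids
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

B, S, V = 2, 5, 50  # toy sizes
out = pointer_generator_step(torch.randn(B, V),
                             torch.softmax(torch.randn(B, S), dim=-1),
                             torch.randint(0, V, (B, S)),
                             torch.sigmoid(torch.randn(B, 1)))
assert torch.allclose(out.sum(-1), torch.ones(B))  # still a distribution
```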
- Improving Zero and Few-Shot Abstractive Summarization with Intermediate Fine-tuning and Data Augmentation [101.26235068460551]
Models pretrained with self-supervised objectives on large text corpora achieve state-of-the-art performance on English text summarization tasks.
Models are typically fine-tuned on hundreds of thousands of data points, an infeasible requirement when applying summarization to new, niche domains.
We introduce a novel and generalizable method, called WikiTransfer, for fine-tuning pretrained models for summarization in an unsupervised, dataset-specific manner.
arXiv Detail & Related papers (2020-10-24T08:36:49Z)
- Learning to summarize from human feedback [18.964548137315333]
We show that it is possible to significantly improve summary quality by training a model to optimize for human preferences.
We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone.
Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning.
arXiv Detail & Related papers (2020-09-02T19:54:41Z)
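The human-feedback paper's first stage fits a reward model to pairwise human preferences before optimizing the summarizer against it; the loss below is the standard form of that preference-modeling step (reward_model is a hypothetical stand-in).

```python
# Pairwise preference loss for reward-model training: the human-preferred
# summary should receive the higher scalar reward. reward_model is a
# hypothetical callable mapping encoded summaries to rewards of shape (B,).
import torch.nn.functional as F

def preference_loss(reward_model, preferred, rejected):
    return -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
```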
- Unsupervised Opinion Summarization with Noising and Denoising [85.49169453434554]
We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.
arXiv Detail & Related papers (2020-04-21T16:54:57Z)
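The noising-and-denoising entry states its recipe directly: sample a review to act as the summary, then corrupt copies of it to act as the input reviews. A toy version, with word dropout standing in for the paper's richer noising functions:

```python
# Toy version of the noising recipe: a sampled review plays the "summary",
# and corrupted copies of it play the input reviews. Word dropout stands in
# for the paper's richer noising functions.
import random

def make_synthetic_pair(reviews, n_noisy=3, drop_prob=0.2, seed=0):
    rng = random.Random(seed)
    summary = rng.choice(reviews)
    words = summary.split()
    noisy_reviews = [" ".join(w for w in words if rng.random() > drop_prob)
                     for _ in range(n_noisy)]
    return noisy_reviews, summary
```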
- Learning by Semantic Similarity Makes Abstractive Summarization Better [13.324006587838522]
We compare the generated summaries from a recent language model, BART, and the reference summaries from a benchmark dataset, CNN/DM.
Interestingly, model-generated summaries receive higher scores relative to reference summaries.
arXiv Detail & Related papers (2020-02-18T17:59:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.