Improving Factual Consistency of Abstractive Summarization on Customer
Feedback
- URL: http://arxiv.org/abs/2106.16188v1
- Date: Wed, 30 Jun 2021 16:34:36 GMT
- Title: Improving Factual Consistency of Abstractive Summarization on Customer
Feedback
- Authors: Yang Liu, Yifei Sun, Vincent Gao
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: E-commerce stores collect customer feedback to let sellers learn about
customer concerns and enhance customer order experience. Because customer
feedback often contains redundant information, a concise summary of the
feedback can be generated to help sellers better understand the issues causing
customer dissatisfaction. Previous state-of-the-art abstractive text
summarization models make two major types of factual errors when producing
summaries from customer feedback: wrong entity detection (WED) and
incorrect product-defect description (IPD). In this work, we introduce a set of
methods to enhance the factual consistency of abstractive summarization on
customer feedback. We augment the training data with artificially corrupted
summaries, and use them as counterparts of the target summaries. We add a
contrastive loss term into the training objective so that the model learns to
avoid certain factual errors. Evaluation results show that a large portion of
WED and IPD errors is alleviated for BART and T5. Furthermore, our approaches
do not depend on the structure of the summarization model and are thus
generalizable to any abstractive summarization system.
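The recipe the abstract describes, corrupting reference summaries to create negatives and adding a contrastive term to the training objective, can be sketched roughly as follows. This is an illustrative sketch only: the entity-swap corruption rule, `entity_vocab`, and the hinge-style loss with its `margin`/`weight` values are hypothetical stand-ins, not the paper's exact formulation.

```python
import random

def corrupt_summary(summary_tokens, entity_vocab, rng=None):
    """Build a negative training sample by swapping one entity mention.
    `entity_vocab` is a hypothetical list of entity strings; the paper's
    actual corruption targets WED/IPD-style error patterns."""
    rng = rng or random.Random(0)
    corrupted = list(summary_tokens)
    positions = [i for i, tok in enumerate(corrupted) if tok in entity_vocab]
    if positions:
        pos = rng.choice(positions)
        alternatives = [e for e in entity_vocab if e != corrupted[pos]]
        if alternatives:
            corrupted[pos] = rng.choice(alternatives)
    return corrupted

def training_loss(nll_reference, nll_corrupted, margin=1.0, weight=0.5):
    """Cross-entropy on the reference summary plus a hinge-style
    contrastive term pushing the corrupted summary's likelihood below
    the reference's by at least `margin` (illustrative form only)."""
    hinge = max(0.0, margin - (nll_corrupted - nll_reference))
    return nll_reference + weight * hinge
```

When the corrupted summary is already much less likely than the reference, the hinge term vanishes and training reduces to ordinary cross-entropy; the contrastive term only fires on negatives the model has not yet learned to avoid.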
Related papers
- Towards Personalized Review Summarization by Modeling Historical Reviews
from Customer and Product Separately [59.61932899841944]
Review summarization is a non-trivial task that aims to summarize the main idea of a product review on an e-commerce website.
We propose the Heterogeneous Historical Review aware Review Summarization Model (HHRRS).
We employ a multi-task framework that conducts review sentiment classification and summarization jointly.
arXiv Detail & Related papers (2023-01-27T12:32:55Z)
- Efficient Few-Shot Fine-Tuning for Opinion Summarization [83.76460801568092]
Abstractive summarization models are typically pre-trained on large amounts of generic texts, then fine-tuned on tens or hundreds of thousands of annotated samples.
We show that a few-shot method based on adapters can easily store in-domain knowledge.
We show that this self-supervised adapter pre-training improves summary quality over standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp datasets.
arXiv Detail & Related papers (2022-05-04T16:38:37Z)
- CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization [6.017006996402699]
We study generating abstractive summaries that are faithful and factually consistent with the given articles.
A novel contrastive learning formulation is presented that leverages reference summaries as positive training data and automatically generated erroneous summaries as negative training data, training summarization systems to better distinguish between them.
arXiv Detail & Related papers (2021-09-19T20:05:21Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single- or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency with respect to the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
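The single-masking pass in the Span-Fact entry above can be illustrated with a minimal sketch. Here `choose_entity` is a hypothetical selector standing in for the QA-based model that picks a replacement span from the source document:

```python
def single_mask_correct(summary_tokens, entity_positions, choose_entity):
    """One pass of a single-masking strategy: mask each entity position
    in turn and let `choose_entity(masked_tokens, position)` return a
    replacement drawn from the source. `choose_entity` is a hypothetical
    stand-in for the QA-style span selector."""
    tokens = list(summary_tokens)
    for pos in entity_positions:
        # Mask exactly one entity at a time, conditioning on all others.
        masked = tokens[:pos] + ["[MASK]"] + tokens[pos + 1 :]
        tokens[pos] = choose_entity(masked, pos)
    return tokens
```

Because each replacement conditions on the already-corrected tokens, later corrections can benefit from earlier ones, which is the iterative flavor the entry describes.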
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
- Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
- Unsupervised Opinion Summarization with Noising and Denoising [85.49169453434554]
We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.
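As a rough illustration of the noising step described above, token dropout is one simple way to turn a review (treated as a summary) into noisy pseudo-inputs. The `drop_prob` value and dropout-only noise are assumptions for the sketch, not the paper's full noising scheme:

```python
import random

def noisy_versions(review_tokens, n_versions=3, drop_prob=0.2, rng=None):
    """Produce noisy pseudo-source texts for a review that is being
    treated as a summary, by randomly dropping tokens. Token dropout is
    one simple noise type; richer noising schemes are possible."""
    rng = rng or random.Random(0)
    versions = []
    for _ in range(n_versions):
        kept = [t for t in review_tokens if rng.random() > drop_prob]
        # Guard against dropping everything from a short review.
        versions.append(kept if kept else list(review_tokens))
    return versions
```

Training a model to map each noisy version back to the original review then teaches it to denoise, which is the self-supervised signal the entry relies on.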
arXiv Detail & Related papers (2020-04-21T16:54:57Z)
- Enhancing Factual Consistency of Abstractive Summarization [57.67609672082137]
We propose a fact-aware summarization model FASum to extract and integrate factual relations into the summary generation process.
We then design a factual corrector model FC to automatically correct factual errors from summaries generated by existing systems.
arXiv Detail & Related papers (2020-03-19T07:36:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.