Abstractive Query Focused Summarization with Query-Free Resources
- URL: http://arxiv.org/abs/2012.14774v1
- Date: Tue, 29 Dec 2020 14:39:35 GMT
- Title: Abstractive Query Focused Summarization with Query-Free Resources
- Authors: Yumo Xu and Mirella Lapata
- Abstract summary: In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system.
We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries, and a distantly supervised training task for answer evidence estimation.
Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting.
- Score: 60.468323530248945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The availability of large-scale datasets has driven the development of neural sequence-to-sequence models to generate generic summaries, i.e., summaries which do not correspond to any pre-specified queries. However, due to the lack of training data, query focused summarization (QFS) has been studied mainly with extractive methods. In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system. We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries, and a distantly supervised training task for answer evidence estimation. To further utilize generic data for generation, three attributes are incorporated during training and inference to control the shape of the final summary: evidence rank, query guidance, and summary length. Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting across domains and query types.
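To make the distant-supervision idea concrete: evidence labels can be bootstrapped from any generic summarization corpus by scoring each document sentence against the reference summary with ROUGE, while the summary itself is masked so that a real query can occupy the same input slot at test time. The following is a minimal sketch of this label-construction step, not the authors' code; the masking rate, helper names, and use of the rouge_score package are our assumptions.

```python
# A minimal sketch, not the authors' released code: build
# distant-supervision triples for masked ROUGE regression.
# Assumes the `rouge_score` package (pip install rouge-score).
import random

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
MASK = "[MASK]"


def mask_summary(summary_tokens, mask_rate=0.7, seed=0):
    """Unified representation: a heavily masked summary looks like a
    query, and a query can be read as a summary whose unknown content
    is masked out. The rate here is an arbitrary illustration, not the
    paper's setting."""
    rng = random.Random(seed)
    return " ".join(MASK if rng.random() < mask_rate else tok
                    for tok in summary_tokens)


def build_training_triples(doc_sentences, summary):
    """Score every document sentence against the reference summary and
    pair the score with the masked summary. A regressor trained on
    these triples learns to estimate answer evidence without ever
    observing a real query."""
    masked = mask_summary(summary.split())
    for sent in doc_sentences:
        target = scorer.score(summary, sent)["rouge1"].fmeasure
        yield masked, sent, target
```

At test time, a query would take the place of the masked summary, and the regressor's scores would supply the evidence rank that, together with query guidance and a length budget, controls the generated summary.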
Related papers
- QontSum: On Contrasting Salient Content for Query-focused Summarization [22.738731393540633]
Query-focused summarization (QFS) is a challenging task in natural language processing that generates summaries to address specific queries.
This paper highlights the role of QFS in Grounded Answer Generation (GAR).
We propose QontSum, a novel approach for QFS that leverages contrastive learning to help the model attend to the most relevant regions of the input document.
arXiv Detail & Related papers (2023-07-14T19:25:35Z)
- Multimodal Prompt Retrieval for Generative Visual Question Answering [9.973591610073006]
We propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text.
Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30 accuracy points in a few-shot domain adaptation setting.
arXiv Detail & Related papers (2023-06-30T14:06:13Z)
- LMGQS: A Large-scale Dataset for Query-focused Summarization [77.6179359525065]
We convert four generic summarization benchmarks into a new QFS benchmark dataset, LMGQS.
We establish baselines with state-of-the-art summarization models.
We achieve state-of-the-art zero-shot and supervised performance on multiple existing QFS benchmarks.
arXiv Detail & Related papers (2023-05-22T14:53:45Z)
- MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization [55.60306377044225]
State-of-the-art summarization systems can generate highly fluent summaries.
These summaries, however, may contain factual inconsistencies and/or information not present in the source.
We introduce an alternative scheme based on standard information-theoretic measures in which the information present in the source and summary is directly compared; a minimal sketch of this idea appears after this list.
arXiv Detail & Related papers (2023-01-28T23:08:25Z)
- Exploring Neural Models for Query-Focused Summarization [74.41256438059256]
We conduct a systematic exploration of neural approaches to query-focused summarization (QFS).
We present two model extensions that achieve state-of-the-art performance on the QMSum dataset, surpassing prior results by margins of up to 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L points.
arXiv Detail & Related papers (2021-12-14T18:33:29Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug in queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
- Improve Query Focused Abstractive Summarization by Incorporating Answer Relevance [43.820971952979875]
We propose QFS-BART, a model that incorporates the explicit answer relevance of the source documents given the query via a question answering model.
Our model can take advantage of large pre-trained models, which significantly improve summarization performance.
Empirical results on the Debatepedia dataset show that the proposed model achieves the new state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T06:58:42Z)
- WSL-DS: Weakly Supervised Learning with Distant Supervision for Query Focused Multi-Document Abstractive Summarization [16.048329028104643]
In the Query Focused Multi-Document Summarization (QF-MDS) task, a set of documents and a query are given, and the goal is to generate a summary from these documents.
One major challenge for this task is the lack of availability of labeled training datasets.
We propose a novel weakly supervised learning approach via utilizing distant supervision.
arXiv Detail & Related papers (2020-11-03T02:02:55Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
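A minimal sketch of the MQAG-style consistency check referenced above: pose the same multiple-choice question against the source and the summary, and measure the divergence between the two answer distributions. This is our illustration rather than the MQAG implementation; `answer_dist_fn` is a hypothetical hook for any multiple-choice QA model that returns option probabilities.

```python
# A minimal sketch, not the MQAG implementation: compare the answer
# distributions a QA model assigns to the same multiple-choice question
# when it reads the source versus the summary.
import math


def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over answer options."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))


def inconsistency_score(questions, answer_dist_fn, source, summary):
    """Average divergence between source- and summary-conditioned answer
    distributions; `answer_dist_fn(context, question)` is a hypothetical
    hook returning one probability per answer option. Larger scores
    suggest the summary carries information the source does not support."""
    divs = [
        kl_divergence(answer_dist_fn(source, q), answer_dist_fn(summary, q))
        for q in questions
    ]
    return sum(divs) / len(divs)
```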
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.