LMGQS: A Large-scale Dataset for Query-focused Summarization
- URL: http://arxiv.org/abs/2305.13086v1
- Date: Mon, 22 May 2023 14:53:45 GMT
- Title: LMGQS: A Large-scale Dataset for Query-focused Summarization
- Authors: Ruochen Xu, Song Wang, Yang Liu, Shuohang Wang, Yichong Xu, Dan Iter,
Chenguang Zhu, Michael Zeng
- Abstract summary: We convert four generic summarization benchmarks into a new QFS benchmark dataset, LMGQS.
We establish baselines with state-of-the-art summarization models.
We achieve state-of-the-art zero-shot and supervised performance on multiple existing QFS benchmarks.
- Score: 77.6179359525065
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Query-focused summarization (QFS) aims to extract or generate a summary of an
input document that directly answers or is relevant to a given query. The lack
of large-scale datasets in the form of documents, queries, and summaries has
hindered model development in this area. In contrast, multiple large-scale
high-quality datasets for generic summarization exist. We hypothesize that
there is a hidden query for each summary sentence in a generic summarization
annotation, and we utilize a large-scale pretrained language model to recover
it. In this way, we convert four generic summarization benchmarks into a new
QFS benchmark dataset, LMGQS, which consists of over 1 million
document-query-summary samples. We thoroughly investigate the properties of our
proposed dataset and establish baselines with state-of-the-art summarization
models. By fine-tuning a language model on LMGQS, we achieve state-of-the-art
zero-shot and supervised performance on multiple existing QFS benchmarks,
demonstrating the high quality and diversity of LMGQS.
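As a rough sketch of this conversion recipe: the snippet below prompts an instruction-following language model to recover the hidden query behind each summary sentence. The prompt wording and the flan-t5 checkpoint are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch of LMGQS-style conversion: recover a hidden query per summary
# sentence with an off-the-shelf LM. Prompt and checkpoint are assumptions.
from transformers import pipeline

lm = pipeline("text2text-generation", model="google/flan-t5-base")

def recover_query(document: str, summary_sentence: str) -> str:
    """Ask the LM which question about the document this sentence answers."""
    prompt = (
        f"Document: {document}\n"
        f"Summary sentence: {summary_sentence}\n"
        "What question about the document does this sentence answer?"
    )
    return lm(prompt, max_new_tokens=32)[0]["generated_text"]

def to_qfs_samples(document: str, summary_sentences: list[str]) -> list[dict]:
    # One document-query-summary triple per summary sentence.
    return [
        {"document": document, "query": recover_query(document, s), "summary": s}
        for s in summary_sentences
    ]
```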
Related papers
- IDEAL: Leveraging Infinite and Dynamic Characterizations of Large Language Models for Query-focused Summarization [59.06663981902496]
Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization.
We investigate two indispensable characteristics that LLM-based QFS models should harness: Lengthy Document Summarization and Efficient Fine-grained Query-LLM Alignment.
These innovations pave the way for broader application and accessibility in the field of QFS technology.
arXiv Detail & Related papers (2024-07-15T07:14:56Z)
- A Lightweight Constrained Generation Alternative for Query-focused Summarization [8.264410236351111]
Query-focused summarization (QFS) aims to provide a summary of a document that satisfies the information need of a given query.
We propose leveraging a recently developed constrained generation model, NeuroLogic Decoding (NLD), as an alternative to current QFS approaches.
We demonstrate the efficacy of this approach on two public QFS collections, achieving near parity with the state-of-the-art model at substantially reduced complexity (a hedged decoding sketch follows this entry).
arXiv Detail & Related papers (2023-04-23T18:43:48Z)
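For flavor, here is a hedged sketch of lexically constrained decoding in the spirit of NLD: stock Hugging Face constrained beam search (force_words_ids) stands in for NeuroLogic Decoding proper, and forcing query terms into the summary is an illustrative assumption, not necessarily the paper's exact constraint set.

```python
# Constrained beam search as a stand-in for NeuroLogic Decoding: force the
# query terms to appear in the generated summary. Model choice is an assumption.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

document = "The council approved a new climate policy after months of debate."
query_terms = ["climate", "policy"]  # terms from the query

inputs = tokenizer(document, return_tensors="pt", truncation=True)
force_words_ids = [
    tokenizer(term, add_special_tokens=False).input_ids for term in query_terms
]

summary_ids = model.generate(
    **inputs,
    force_words_ids=force_words_ids,  # each term must appear in the output
    num_beams=5,                      # constrained generation requires beam search
    max_new_tokens=60,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```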
- UniSumm and SummZoo: Unified Model and Diverse Benchmark for Few-Shot Summarization [54.59104881168188]
UniSumm is a unified few-shot summarization model pre-trained with multiple summarization tasks.
SummZoo is a new benchmark to better evaluate few-shot summarizers.
arXiv Detail & Related papers (2022-11-17T18:54:47Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug in queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains (a toy sketch of the joint setup follows this entry).
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
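As a toy illustration only (not LaQSum's actual architecture): the sketch below wires a latent query head and a conditional decoder into one module so that a single loss updates both; every layer and dimension here is an assumption.

```python
# Toy joint setup: a latent query model and a conditional generator trained
# with one loss. All shapes and layers are assumptions, not LaQSum's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentQuerySummarizer(nn.Module):
    def __init__(self, vocab: int = 1000, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)      # toy document encoder
        self.query_head = nn.Linear(hidden, hidden)   # latent query model
        self.decoder = nn.Linear(2 * hidden, vocab)   # conditional LM head

    def forward(self, doc_ids: torch.Tensor) -> torch.Tensor:
        doc = self.embed(doc_ids).mean(dim=1)         # pool document tokens
        z = torch.tanh(self.query_head(doc))          # infer a latent query
        # At test time z could instead encode a user-supplied query, which is
        # the plug-and-play idea in spirit.
        return self.decoder(torch.cat([doc, z], dim=-1))

model = LatentQuerySummarizer()
logits = model(torch.randint(0, 1000, (2, 16)))       # two toy "documents"
loss = F.cross_entropy(logits, torch.randint(0, 1000, (2,)))
loss.backward()                                       # one loss updates both parts
```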
- Data Augmentation for Abstractive Query-Focused Multi-Document Summarization [129.96147867496205]
We present two QMDS training datasets, which we construct using two data augmentation methods.
These two datasets have complementary properties: QMDSCNN has real summaries but simulated queries, while QMDSIR has real queries but simulated summaries.
We build end-to-end neural network models on the combined datasets that yield new state-of-the-art transfer results on DUC datasets.
arXiv Detail & Related papers (2021-03-02T16:57:01Z)
- Abstractive Query Focused Summarization with Query-Free Resources [60.468323530248945]
In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system.
We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries.
Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting.
arXiv Detail & Related papers (2020-12-29T14:39:35Z)
- QBSUM: a Large-Scale Query-Based Document Summarization Dataset from Real-world Applications [20.507631900617817]
We present QBSUM, a high-quality large-scale dataset consisting of 49,000+ data samples for the task of Chinese query-based document summarization.
We also propose multiple unsupervised and supervised solutions to the task and demonstrate their high-speed inference and superior performance via both offline experiments and online A/B tests.
arXiv Detail & Related papers (2020-10-27T07:30:04Z)
- AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization [17.098075160558576]
We propose a scalable approach called AQuaMuSe to automatically mine qMDS examples from question answering datasets and large document corpora.
We publicly release a specific instance of an AQuaMuSe dataset with 5,519 query-based summaries, each associated with an average of 6 input documents selected from an index of 355M documents from Common Crawl.
arXiv Detail & Related papers (2020-10-23T22:38:18Z)