Few-shot Query-Focused Summarization with Prefix-Merging
- URL: http://arxiv.org/abs/2211.16164v1
- Date: Tue, 29 Nov 2022 12:48:37 GMT
- Title: Few-shot Query-Focused Summarization with Prefix-Merging
- Authors: Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li
- Abstract summary: We propose a prefix-based pretraining strategy for few-shot learning in query-focused summarization.
With only a small number of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization.
- Score: 10.572282987037353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Query-focused summarization has been considered an important extension of
text summarization. It aims to generate a concise highlight for a given query.
Different from text summarization, query-focused summarization has long been
plagued by the problem of lacking high-quality large-scale datasets. In this
paper, we investigate whether we can integrate and transfer the knowledge of
text summarization and question answering to assist few-shot
learning in query-focused summarization. Here, we propose prefix-merging, a
prefix-based pretraining strategy for few-shot learning in query-focused
summarization. Drawing inspiration from prefix-tuning, we integrate the task
knowledge from text summarization and question answering into a properly
designed prefix and apply the merged prefix to query-focused summarization.
With only a small number of trainable parameters, prefix-merging
outperforms fine-tuning on query-focused summarization. We further discuss the
influence of different prefix designs and propose a visualized explanation for
how prefix-merging works.
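The abstract describes prefix-merging only at a high level, so the snippet below is a minimal PyTorch sketch of the general idea rather than the paper's implementation: train one prefix-tuning prefix on text summarization and one on question answering while the backbone stays frozen, then merge the two into a single prefix that is fine-tuned on a handful of query-focused summarization examples. The names (TaskPrefix, merge_prefixes), the concatenation-based merge, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TaskPrefix(nn.Module):
    """Prefix-tuning style virtual tokens: learnable vectors that are later
    reshaped into per-layer key/value pairs for a frozen backbone (e.g. BART)."""
    def __init__(self, prefix_len: int, hidden_size: int):
        super().__init__()
        self.embedding = nn.Parameter(torch.randn(prefix_len, hidden_size) * 0.02)

    def forward(self, batch_size: int) -> torch.Tensor:
        # Expand to [batch, prefix_len, hidden] so the prefix can be prepended
        # to the frozen model's attention layers.
        return self.embedding.unsqueeze(0).expand(batch_size, -1, -1)

def merge_prefixes(summ_prefix: TaskPrefix, qa_prefix: TaskPrefix) -> TaskPrefix:
    """Merge two task prefixes by concatenating them along the prefix-length
    axis (one simple merging choice; the paper discusses several prefix designs)."""
    merged = TaskPrefix(
        prefix_len=summ_prefix.embedding.size(0) + qa_prefix.embedding.size(0),
        hidden_size=summ_prefix.embedding.size(1),
    )
    with torch.no_grad():
        merged.embedding.copy_(
            torch.cat([summ_prefix.embedding, qa_prefix.embedding], dim=0))
    return merged

# Illustrative workflow: pretrain each prefix on its source task with the
# language model frozen, merge, then tune only the merged prefix on the
# few-shot query-focused summarization data.
summ_prefix = TaskPrefix(prefix_len=20, hidden_size=768)
qa_prefix = TaskPrefix(prefix_len=20, hidden_size=768)
qfs_prefix = merge_prefixes(summ_prefix, qa_prefix)
```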
Related papers
- Thesis: Document Summarization with applications to Keyword extraction and Image Retrieval [0.0]
We propose a set of submodular functions for opinion summarization.
Opinion summarization combines the tasks of summarization and sentiment detection.
Our functions generate summaries such that there is good correlation between document sentiment and summary sentiment, along with good ROUGE scores.
arXiv Detail & Related papers (2024-05-20T21:27:18Z) - QFMTS: Generating Query-Focused Summaries over Multi-Table Inputs [63.98556480088152]
Table summarization is a crucial task aimed at condensing information into concise and comprehensible textual summaries.
We propose a novel method to address these limitations by introducing query-focused multi-table summarization.
Our approach, which comprises a table serialization module, a summarization controller, and a large language model, generates query-dependent table summaries tailored to users' information needs.
arXiv Detail & Related papers (2024-05-08T15:05:55Z) - Investigating Consistency in Query-Based Meeting Summarization: A
Comparative Study of Different Embedding Methods [0.0]
Text summarization is one of the best-known applications in the Natural Language Processing (NLP) field.
It aims to automatically generate a summary containing the important information of a given context.
In this paper, we are inspired by "QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization" proposed by Microsoft.
We also propose our Locater model, designed to extract relevant spans given a transcript and query, which are then summarized by a Summarizer model.
arXiv Detail & Related papers (2024-02-10T08:25:30Z) - Aspect-based Meeting Transcript Summarization: A Two-Stage Approach with
Weak Supervision on Sentence Classification [91.13086984529706]
Aspect-based meeting transcript summarization aims to produce multiple summaries, each focusing on one aspect of the meeting content.
Traditional summarization methods produce a single summary that mixes information from all aspects.
We propose a two-stage method for aspect-based meeting transcript summarization.
arXiv Detail & Related papers (2023-11-07T19:06:31Z) - Improving Query-Focused Meeting Summarization with Query-Relevant
Knowledge [71.14873115781366]
We propose a knowledge-enhanced two-stage framework called Knowledge-Aware Summarizer (KAS) to tackle the challenges.
In the first stage, we introduce knowledge-aware scores to improve the query-relevant segment extraction.
In the second stage, we incorporate query-relevant knowledge in the summary generation.
arXiv Detail & Related papers (2023-09-05T10:26:02Z) - Query-Utterance Attention with Joint modeling for Query-Focused Meeting
Summarization [4.763356598070365]
We propose a query-aware framework that jointly models tokens and utterances based on Query-Utterance Attention.
We show that query relevance at different granularities contributes to generating a summary more related to the query (a simple illustration of scoring relevance at both granularities is sketched after this list).
arXiv Detail & Related papers (2023-03-08T10:21:45Z) - AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer
Summarization [73.91543616777064]
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions.
One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.
This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists.
arXiv Detail & Related papers (2021-11-11T21:48:02Z) - Aspect-Oriented Summarization through Query-Focused Extraction [23.62412515574206]
Real users' needs often fall more closely into aspects, broad topics in a dataset that the user is interested in, rather than into specific queries.
We benchmark extractive query-focused training schemes, and propose a contrastive augmentation approach to train the model.
We evaluate on two aspect-oriented datasets and find this approach yields focused summaries, better than those from a generic summarization system.
arXiv Detail & Related papers (2021-10-15T18:06:21Z) - Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z) - Contextualized Rewriting for Text Summarization [10.666547385992935]
We formalize rewriting as a seq2seq problem with group alignments.
Results show that our approach significantly outperforms non-contextualized rewriting systems.
arXiv Detail & Related papers (2021-01-31T05:35:57Z) - Abstractive Query Focused Summarization with Query-Free Resources [60.468323530248945]
In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system.
We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries.
Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting.
arXiv Detail & Related papers (2020-12-29T14:39:35Z)
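As a companion to the Query-Utterance Attention entry above, the sketch below shows one simple way to score query relevance at both token and utterance granularity and mix the two; the function name, the mean-pooling of utterances, and the equal mixing weights are assumptions for illustration, not that paper's architecture.

```python
import torch
import torch.nn.functional as F

def query_relevance_scores(query_vec, token_vecs, utt_boundaries):
    """query_vec: [d]; token_vecs: [num_tokens, d];
    utt_boundaries: list of (start, end) token-index pairs, one per utterance.
    Returns one query-relevance score per utterance, mixing token-level and
    utterance-level attention to the query."""
    # Token granularity: attention of every token to the query.
    token_attn = F.softmax(token_vecs @ query_vec, dim=0)            # [num_tokens]
    # Utterance granularity: mean-pool each utterance, then attend to the query.
    utt_vecs = torch.stack([token_vecs[s:e].mean(dim=0) for s, e in utt_boundaries])
    utt_attn = F.softmax(utt_vecs @ query_vec, dim=0)                # [num_utterances]
    # Aggregate token attention inside each utterance and mix the granularities.
    token_per_utt = torch.stack([token_attn[s:e].sum() for s, e in utt_boundaries])
    return 0.5 * token_per_utt + 0.5 * utt_attn

# Tiny usage example with random vectors and three utterances.
scores = query_relevance_scores(
    torch.randn(8), torch.randn(20, 8), [(0, 7), (7, 14), (14, 20)])
```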