Event-Keyed Summarization
- URL: http://arxiv.org/abs/2402.06973v1
- Date: Sat, 10 Feb 2024 15:32:53 GMT
- Title: Event-Keyed Summarization
- Authors: William Gantt, Alexander Martin, Pavlo Kuchmiichuk, and Aaron Steven White
- Abstract summary: Event-keyed summarization (EKS) is a novel task that marries traditional summarization and document-level event extraction.
We introduce a dataset for this task, MUCSUM, consisting of summaries of all events in the classic MUC-4 dataset.
We show that ablations that reduce EKS to traditional summarization or structure-to-text yield inferior summaries of target events.
- Score: 46.521305453350635
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce event-keyed summarization (EKS), a novel task that marries
traditional summarization and document-level event extraction, with the goal of
generating a contextualized summary for a specific event, given a document and
an extracted event structure. We introduce a dataset for this task, MUCSUM,
consisting of summaries of all events in the classic MUC-4 dataset, along with a set of baselines comprising both pretrained LM standards from the summarization literature and larger frontier models. We show that
ablations that reduce EKS to traditional summarization or structure-to-text
yield inferior summaries of target events and that MUCSUM is a robust benchmark
for this task. Lastly, we conduct a human evaluation of both reference and model summaries and provide a detailed analysis of the results.
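To make the task format concrete, here is a minimal sketch of what a single EKS instance might look like. The field names and the toy MUC-4-style event template below are illustrative assumptions, not the actual MUCSUM schema.

```python
# Illustrative sketch of one event-keyed summarization (EKS) instance.
# Field names and the toy MUC-4-style template are hypothetical stand-ins
# for the MUCSUM schema, not the dataset's actual format.
eks_instance = {
    # source document from which events were extracted
    "document": (
        "A bomb exploded outside the embassy on Tuesday, injuring two guards. "
        "Police later detained a suspect tied to a local militant group."
    ),
    # the extracted event structure that keys the summary
    "event": {
        "type": "bombing",
        "perpetrator": "local militant group",
        "target": "embassy",
        "victims": ["two guards"],
    },
    # target output: a summary contextualized to this specific event
    "summary": (
        "A bombing attributed to a local militant group struck the embassy "
        "on Tuesday, injuring two guards."
    ),
}
```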
Related papers
- Cross-Document Event-Keyed Summarization [35.957271217461525]
We extend event-keyed summarization (EKS) to the cross-document setting (CDEKS).
We introduce SEAMUS, a high-quality dataset for CDEKS based on an expert reannotation of the FAMUS dataset for cross-document argument extraction.
We present a suite of baselines on SEAMUS, covering both smaller fine-tuned models and zero- and few-shot prompted LLMs, along with detailed ablations and a human evaluation study.
arXiv Detail & Related papers (2024-10-18T18:09:45Z)
- Write Summary Step-by-Step: A Pilot Study of Stepwise Summarization [48.57273563299046]
We propose the task of Stepwise Summarization, which aims to generate a new appended summary each time a new document is proposed.
The appended summary should not only summarize the newly added content but also be coherent with the previous summary.
We show that SSG achieves state-of-the-art performance in terms of both automatic metrics and human evaluations.
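As a rough illustration of this stepwise setting (not the SSG model itself), the sketch below appends a new summary segment for each incoming document while conditioning on the running summary; `summarize_step` is a hypothetical stand-in for any such model.

```python
# Generic sketch of stepwise summarization: each new document yields an
# appended summary that is conditioned on the summary produced so far.
# `summarize_step` is a placeholder for a model like SSG, not its actual API.
def stepwise_summarize(documents, summarize_step):
    running_summary = ""
    appended_parts = []
    for doc in documents:
        # the appended segment should cover the new content and stay
        # coherent with the previous summary
        appended = summarize_step(doc, running_summary)
        appended_parts.append(appended)
        running_summary = " ".join(appended_parts).strip()
    return running_summary
```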
arXiv Detail & Related papers (2024-06-08T05:37:26Z)
- Follow the Timeline! Generating Abstractive and Extractive Timeline Summary in Chronological Order [78.46986998674181]
We propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order.
We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset.
UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
arXiv Detail & Related papers (2023-01-02T20:29:40Z)
- UniSumm and SummZoo: Unified Model and Diverse Benchmark for Few-Shot Summarization [54.59104881168188]
UniSumm is a unified few-shot summarization model pre-trained with multiple summarization tasks.
SummZoo is a new benchmark to better evaluate few-shot summarizers.
arXiv Detail & Related papers (2022-11-17T18:54:47Z)
- Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach in which large language models are utilized to generate source documents that allow predicting, given a high-level event definition, the specific events, arguments, and relations between them.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z)
- Unsupervised Summarization with Customized Granularities [76.26899748972423]
We propose the first unsupervised multi-granularity summarization framework, GranuSum.
By inputting different numbers of events, GranuSum is capable of producing multi-granular summaries in an unsupervised manner.
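As a generic illustration of controlling granularity through the number of input events (an assumed interface, not GranuSum's actual algorithm), a summarizer could be invoked with progressively larger event sets:

```python
# Generic sketch of multi-granularity summarization keyed to event count.
# `events` are assumed to be ranked by salience and `generate` is a
# hypothetical text generator; neither reflects GranuSum's internals.
def summarize_at_granularity(events, k, generate):
    selected = events[:k]  # fewer events -> coarser summary, more -> finer
    return generate(selected)

# Example usage (hypothetical):
# coarse = summarize_at_granularity(ranked_events, 1, generate)
# fine = summarize_at_granularity(ranked_events, 5, generate)
```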
arXiv Detail & Related papers (2022-01-29T05:56:35Z)
- AgreeSum: Agreement-Oriented Multi-Document Summarization [3.4743618614284113]
Given a cluster of articles, the goal is to provide abstractive summaries that represent information common and faithful to all input articles.
We create a dataset for AgreeSum and provide annotations on article-summary entailment relations for a subset of the clusters in the dataset.
arXiv Detail & Related papers (2021-06-04T06:17:49Z)
- WSL-DS: Weakly Supervised Learning with Distant Supervision for Query Focused Multi-Document Abstractive Summarization [16.048329028104643]
In the Query Focused Multi-Document Summarization (QF-MDS) task, a set of documents and a query are given, and the goal is to generate a summary from these documents.
One major challenge for this task is the lack of availability of labeled training datasets.
We propose a novel weakly supervised learning approach that uses distant supervision.
arXiv Detail & Related papers (2020-11-03T02:02:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.