Follow the Timeline! Generating Abstractive and Extractive Timeline
Summary in Chronological Order
- URL: http://arxiv.org/abs/2301.00867v1
- Date: Mon, 2 Jan 2023 20:29:40 GMT
- Title: Follow the Timeline! Generating Abstractive and Extractive Timeline
Summary in Chronological Order
- Authors: Xiuying Chen, Mingzhe Li, Shen Gao, Zhangming Chan, Dongyan Zhao, Xin
Gao, Xiangliang Zhang, Rui Yan
- Abstract summary: We propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order.
We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset.
UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
- Score: 78.46986998674181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, time-stamped web documents related to a general news query
flood the Internet, and timeline summarization aims to concisely
summarize the evolution trajectory of events along the timeline. Unlike
traditional document summarization, timeline summarization needs to model the
time-series information of the input events and summarize important events in
chronological order. To tackle this challenge, in this paper, we propose a
Unified Timeline Summarizer (UTS) that can generate abstractive and extractive
timeline summaries in time order. Concretely, in the encoder part, we propose a
graph-based event encoder that relates multiple events according to their
content dependency and learns a global representation of each event. In the
decoder part, to ensure the chronological order of the abstractive summary, we
propose to extract the event-level attention feature during the generation
process, with its sequential information retained, and use it to simulate the
evolutionary attention of the ground-truth summary. The event-level attention
can also assist extractive summarization, so that the extracted summary
likewise comes in chronological order. We augment the previous Chinese large-scale
timeline summarization dataset and collect a new English timeline dataset.
Extensive experiments conducted on these datasets and on the out-of-domain
Timeline17 dataset show that UTS achieves state-of-the-art performance in
terms of both automatic and human evaluations.
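The abstract gives no code, but the two core ideas (a graph-based event encoder that relates events by content dependency, and event-level decoder attention supervised to follow time order) can be illustrated with a toy sketch. Everything below is our assumption for illustration only: the variable names, the soft adjacency built from dot-product similarity, and the KL-style chronological supervision are stand-ins, not the authors' actual architecture or loss.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_events, d = 4, 8

# --- Graph-based event encoder (toy) ---
# Each event starts as a local vector; a soft adjacency over events built
# from content similarity (a stand-in for "content dependency") plus one
# round of message passing yields a global representation per event.
local = rng.normal(size=(n_events, d))
adj = softmax(local @ local.T, axis=-1)   # soft adjacency over events
event_reprs = adj @ local                 # aggregated global representations

# --- Event-level attention in the decoder (toy) ---
decoder_states = rng.normal(size=(n_events, d))  # one state per decoding step
attn = softmax(decoder_states @ event_reprs.T, axis=-1)  # (steps, events)

# Chronological supervision: the target attention at step t peaks on event t,
# nudging the generated summary to visit events in time order.
target = np.eye(n_events)
kl = float((target * (np.log(target + 1e-9) - np.log(attn + 1e-9))).sum())

print(np.round(attn, 3))
print(round(kl, 3))
```

In this reading, minimizing the divergence between the decoder's event-level attention and the chronologically ordered target is what keeps both the abstractive and the extracted summaries in time sequence.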
Related papers
- Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding [57.62275091656578]
We refer to the complex events composed of many news articles over an extended period as Temporal Complex Events (TCEs).
This paper proposes a novel approach using Large Language Models (LLMs) to systematically extract and analyze the event chain within TCE.
arXiv Detail & Related papers (2024-06-04T16:42:17Z) - Event-Keyed Summarization [46.521305453350635]
Event-keyed summarization (EKS) is a novel task that marries traditional summarization and document-level event extraction.
We introduce a dataset for this task, MUCSUM, consisting of summaries of all events in the classic MUC-4 dataset.
We show that ablations that reduce EKS to traditional summarization or structure-to-text yield inferior summaries of target events.
arXiv Detail & Related papers (2024-02-10T15:32:53Z) - Background Summarization of Event Timelines [13.264991569806572]
We introduce the task of background news summarization, which complements each timeline update with a background summary of relevant preceding events.
We construct a dataset by merging existing timeline datasets and asking human annotators to write a background summary for each timestep of each news event.
We establish strong baseline performance using state-of-the-art summarization systems and propose a query-focused variant to generate background summaries.
arXiv Detail & Related papers (2023-10-24T21:30:15Z) - Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach in which large language models are utilized to generate source documents that allow predicting, given a high-level event definition, the specific events, arguments, and relations between them.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z) - SQuALITY: Building a Long-Document Summarization Dataset the Hard Way [31.832673451018543]
We hire highly-qualified contractors to read stories and write original summaries from scratch.
To amortize reading time, we collect five summaries per document, with the first giving an overview and the subsequent four addressing specific questions.
Experiments with state-of-the-art summarization systems show that our dataset is challenging and that existing automatic evaluation metrics are weak indicators of quality.
arXiv Detail & Related papers (2022-05-23T17:02:07Z) - Unsupervised Summarization with Customized Granularities [76.26899748972423]
We propose the first unsupervised multi-granularity summarization framework, GranuSum.
By inputting different numbers of events, GranuSum is capable of producing multi-granular summaries in an unsupervised manner.
arXiv Detail & Related papers (2022-01-29T05:56:35Z) - CNTLS: A Benchmark Dataset for Abstractive or Extractive Chinese
Timeline Summarization [22.813746290856916]
We introduce the CNTLS dataset, a versatile resource for Chinese timeline summarization.
CNTLS encompasses 77 real-life topics and 2,524 documents, with summaries that compress the timelines' duration in days by nearly 60%.
We evaluate the performance of various extractive and generative summarization systems on the CNTLS corpus.
arXiv Detail & Related papers (2021-05-29T03:47:10Z) - Abstractive Query Focused Summarization with Query-Free Resources [60.468323530248945]
In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system.
We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries.
Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting.
arXiv Detail & Related papers (2020-12-29T14:39:35Z) - Screenplay Summarization Using Latent Narrative Structure [78.45316339164133]
We propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models.
We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays.
Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode.
arXiv Detail & Related papers (2020-04-27T11:54:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.