Follow the Timeline! Generating Abstractive and Extractive Timeline
Summary in Chronological Order
- URL: http://arxiv.org/abs/2301.00867v1
- Date: Mon, 2 Jan 2023 20:29:40 GMT
- Title: Follow the Timeline! Generating Abstractive and Extractive Timeline
Summary in Chronological Order
- Authors: Xiuying Chen, Mingzhe Li, Shen Gao, Zhangming Chan, Dongyan Zhao, Xin
Gao, Xiangliang Zhang, Rui Yan
- Abstract summary: We propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order.
We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset.
UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
- Score: 78.46986998674181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, time-stamped web documents related to a general news query
flood the Internet, and timeline summarization aims to concisely summarize the
evolution trajectory of events along the timeline. Unlike
traditional document summarization, timeline summarization needs to model the
time series information of the input events and summarize important events in
chronological order. To tackle this challenge, in this paper, we propose a
Unified Timeline Summarizer (UTS) that can generate abstractive and extractive
timeline summaries in time order. Concretely, in the encoder part, we propose a
graph-based event encoder that relates multiple events according to their
content dependency and learns a global representation of each event. In the
decoder part, to ensure the chronological order of the abstractive summary, we
propose to extract event-level attention features during generation, with their
sequential information retained, and use them to mimic the evolutionary
attention of the ground-truth summary. The event-level attention can also
assist extractive summarization, so that the extracted summary likewise follows
the time sequence. We augment the previous Chinese large-scale
timeline summarization dataset and collect a new English timeline dataset.
Extensive experiments conducted on these datasets and on the out-of-domain
Timeline 17 dataset show that UTS achieves state-of-the-art performance in
terms of both automatic and human evaluations.
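The decoder's chronological-order idea can be illustrated with a toy sketch. This is my own minimal illustration, not the paper's method: the function names and the backward-movement penalty below are assumptions standing in for UTS's event-level attention supervision, which trains attention to move forward through events in time order.

```python
import numpy as np

def event_attention_centroid(attn):
    """Expected event index attended to at each decoding step.

    attn: array of shape (steps, num_events), each row a
    distribution over input events (rows sum to 1).
    """
    event_idx = np.arange(attn.shape[1])
    return attn @ event_idx  # (steps,)

def chronology_penalty(attn):
    """Penalize decoding steps whose attention centroid moves
    backward in time -- a crude proxy for keeping the generated
    summary in chronological order."""
    centroid = event_attention_centroid(attn)
    backward = np.maximum(centroid[:-1] - centroid[1:], 0.0)
    return float(backward.sum())
```

Under this sketch, attention that sweeps forward over events incurs zero penalty, while attention that jumps back to earlier events is penalized; a real implementation would fold such a term into the training loss alongside the generation objective.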
Related papers
- TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents [52.13094810313054]
TimeCAP is a time-series processing framework that creatively employs Large Language Models (LLMs) as contextualizers of time series data.
TimeCAP incorporates two independent LLM agents: one generates a textual summary capturing the context of the time series, while the other uses this enriched summary to make more informed predictions.
Experimental results on real-world datasets demonstrate that TimeCAP outperforms state-of-the-art methods for time series event prediction.
arXiv Detail & Related papers (2025-02-17T04:17:27Z)
- Language in the Flow of Time: Time-Series-Paired Texts Weaved into a Unified Temporal Narrative [65.84249211767921]
Texts as Time Series (TaTS) considers the time-series-paired texts to be auxiliary variables of the time series.
TaTS can be plugged into any existing numerical-only time series models and enable them to handle time series data with paired texts effectively.
arXiv Detail & Related papers (2025-02-13T03:43:27Z)
- Unfolding the Headline: Iterative Self-Questioning for News Retrieval and Timeline Summarization [93.56166917491487]
This paper proposes CHRONOS - Causal Headline Retrieval for Open-domain News Timeline SummarizatiOn via Iterative Self-Questioning.
Our experiments indicate that CHRONOS is not only adept at open-domain timeline summarization, but it also rivals the performance of existing state-of-the-art systems designed for closed-domain applications.
arXiv Detail & Related papers (2025-01-01T16:28:21Z)
- Retrieval of Temporal Event Sequences from Textual Descriptions [0.0]
We introduce TESRBench, a benchmark for temporal event sequence retrieval from textual descriptions.
We propose TPP-Embedding, a novel model for embedding and retrieving event sequences.
TPP-Embedding demonstrates superior performance over baseline models across TESRBench datasets.
arXiv Detail & Related papers (2024-10-17T21:35:55Z)
- Event-Keyed Summarization [46.521305453350635]
Event-keyed summarization (EKS) is a novel task that marries traditional summarization and document-level event extraction.
We introduce a dataset for this task, MUCSUM, consisting of summaries of all events in the classic MUC-4 dataset.
We show that ablations that reduce EKS to traditional summarization or structure-to-text yield inferior summaries of target events.
arXiv Detail & Related papers (2024-02-10T15:32:53Z)
- Background Summarization of Event Timelines [13.264991569806572]
We introduce the task of background news summarization, which complements each timeline update with a background summary of relevant preceding events.
We construct a dataset by merging existing timeline datasets and asking human annotators to write a background summary for each timestep of each news event.
We establish strong baseline performance using state-of-the-art summarization systems and propose a query-focused variant to generate background summaries.
arXiv Detail & Related papers (2023-10-24T21:30:15Z)
- Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach in which large language models are utilized to generate source documents that allow predicting, given a high-level event definition, the specific events, arguments, and relations between them.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z)
- CNTLS: A Benchmark Dataset for Abstractive or Extractive Chinese Timeline Summarization [22.813746290856916]
We introduce the CNTLS dataset, a versatile resource for Chinese timeline summarization.
CNTLS encompasses 77 real-life topics, each with 2,524 documents on average, with summaries compressing the covered days by nearly 60%.
We evaluate the performance of various extractive and generative summarization systems on the CNTLS corpus.
arXiv Detail & Related papers (2021-05-29T03:47:10Z)
- Screenplay Summarization Using Latent Narrative Structure [78.45316339164133]
We propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models.
We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays.
Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode.
arXiv Detail & Related papers (2020-04-27T11:54:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.