Background Summarization of Event Timelines
- URL: http://arxiv.org/abs/2310.16197v1
- Date: Tue, 24 Oct 2023 21:30:15 GMT
- Title: Background Summarization of Event Timelines
- Authors: Adithya Pratapa, Kevin Small, Markus Dreyer
- Abstract summary: We introduce the task of background news summarization, which complements each timeline update with a background summary of relevant preceding events.
We construct a dataset by merging existing timeline datasets and asking human annotators to write a background summary for each timestep of each news event.
We establish strong baseline performance using state-of-the-art summarization systems and propose a query-focused variant to generate background summaries.
- Score: 13.264991569806572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating concise summaries of news events is a challenging natural language
processing task. While journalists often curate timelines to highlight key
sub-events, newcomers to a news event face challenges in catching up on its
historical context. In this paper, we address this need by introducing the task
of background news summarization, which complements each timeline update with a
background summary of relevant preceding events. We construct a dataset by
merging existing timeline datasets and asking human annotators to write a
background summary for each timestep of each news event. We establish strong
baseline performance using state-of-the-art summarization systems and propose a
query-focused variant to generate background summaries. To evaluate background
summary quality, we present a question-answering-based evaluation metric,
Background Utility Score (BUS), which measures the percentage of questions
about a current event timestep that a background summary answers. Our
experiments show the effectiveness of instruction fine-tuned systems such as
Flan-T5, in addition to strong zero-shot performance using GPT-3.5.
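The abstract's description of the Background Utility Score (BUS) maps onto a simple computation: given a set of questions about the current event timestep, count how many an off-the-shelf question-answering model can answer from the background summary alone, and report the percentage. The sketch below illustrates this idea under stated assumptions; the choice of QA model (deepset/roberta-base-squad2) and the confidence threshold for deciding whether a question counts as "answered" are illustrative only, since the abstract does not specify the paper's exact QA setup.

```python
# Minimal sketch of the BUS idea: the percentage of questions about the
# current timestep that a background summary answers. The QA model and the
# answerability threshold below are assumptions for illustration only.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def background_utility_score(questions, background_summary, threshold=0.5):
    """Return the percentage of `questions` answerable from `background_summary`."""
    answered = 0
    for question in questions:
        result = qa(question=question, context=background_summary)
        # Treat a confident extracted span as "answered" (heuristic cutoff).
        if result["score"] >= threshold:
            answered += 1
    return 100.0 * answered / len(questions)

# Toy usage with hypothetical inputs.
questions = [
    "Who was involved in the earlier protests?",
    "When did the initial incident occur?",
]
summary = "Protests began on 3 March after an incident downtown involving city police."
print(f"BUS: {background_utility_score(questions, summary):.1f}%")
```

In the paper's setting, the questions concern the current event timestep, so BUS rewards background summaries that supply the prior context a newcomer needs in order to understand the latest update.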
Related papers
- Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding [57.62275091656578]
We refer to complex events composed of many news articles over an extended period as Temporal Complex Events (TCEs).
This paper proposes a novel approach using Large Language Models (LLMs) to systematically extract and analyze the event chain within TCE.
arXiv Detail & Related papers (2024-06-04T16:42:17Z)
- Beyond Trend and Periodicity: Guiding Time Series Forecasting with Textual Cues [9.053923035530152]
This work introduces a novel Text-Guided Time Series Forecasting (TGTSF) task.
By integrating textual cues, such as channel descriptions and dynamic news, TGTSF addresses the critical limitations of traditional methods.
We propose TGForecaster, a robust baseline model that fuses textual cues and time series data using cross-attention mechanisms.
arXiv Detail & Related papers (2024-05-22T10:45:50Z)
- Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction [36.915250638481986]
We introduce LiveSum, a new benchmark dataset for generating summary tables of competitions based on real-time commentary texts.
We evaluate the performance of state-of-the-art Large Language Models on this task in both fine-tuning and zero-shot settings.
We additionally propose a novel pipeline called $T3$ (Text-Tuple-Table) to improve their performance.
arXiv Detail & Related papers (2024-04-22T14:31:28Z)
- SCTc-TE: A Comprehensive Formulation and Benchmark for Temporal Event Forecasting [63.01035584154509]
We develop a fully automated pipeline and construct a large-scale dataset named MidEast-TE from about 0.6 million news articles.
This dataset focuses on the cooperation and conflict events among countries mainly in the MidEast region from 2015 to 2022.
We propose a novel method, LoGo, that takes advantage of both Local and Global contexts for SCTc-TE forecasting.
arXiv Detail & Related papers (2023-12-02T07:40:21Z)
- Exploring the Limits of Historical Information for Temporal Knowledge Graph Extrapolation [59.417443739208146]
We propose a new event forecasting model based on a novel training framework of historical contrastive learning.
The proposed model, CENET, learns both historical and non-historical dependencies to identify the most likely entities.
We evaluate our proposed model on five benchmark graphs.
arXiv Detail & Related papers (2023-08-29T03:26:38Z)
- Follow the Timeline! Generating Abstractive and Extractive Timeline Summary in Chronological Order [78.46986998674181]
We propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order.
We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset.
UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
arXiv Detail & Related papers (2023-01-02T20:29:40Z)
- SumREN: Summarizing Reported Speech about Events in News [51.82314543729287]
We propose the novel task of summarizing the reactions of different speakers, as expressed by their reported statements, to a given event.
We create a new multi-document summarization benchmark, SUMREN, comprising 745 summaries of reported statements from various public figures.
arXiv Detail & Related papers (2022-12-02T12:51:39Z)
- Detecting Ongoing Events Using Contextual Word and Sentence Embeddings [110.83289076967895]
This paper introduces the Ongoing Event Detection (OED) task.
The goal is to detect ongoing event mentions only, as opposed to historical, future, hypothetical, or other forms of events that are neither fresh nor current.
Any application that needs to extract structured information about ongoing events from unstructured texts can take advantage of an OED system.
arXiv Detail & Related papers (2020-07-02T20:44:05Z)
- Examining the State-of-the-Art in News Timeline Summarization [10.16257074782054]
Previous work on automatic news timeline summarization (TLS) leaves an unclear picture about how this task can generally be approached and how well it is currently solved.
This is mostly due to the focus on individual subtasks, such as date selection and date summarization, and to the previous lack of appropriate evaluation metrics for the full TLS task.
In this paper, we compare different TLS strategies using appropriate evaluation frameworks, and propose a simple and effective combination of methods that improves over the state-of-the-art on all tested benchmarks.
arXiv Detail & Related papers (2020-05-20T15:06:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.