Efficient Retrieval of Temporal Event Sequences from Textual Descriptions
- URL: http://arxiv.org/abs/2410.14043v1
- Date: Thu, 17 Oct 2024 21:35:55 GMT
- Title: Efficient Retrieval of Temporal Event Sequences from Textual Descriptions
- Authors: Zefang Liu, Yinzhu Quan
- Abstract summary: TPP-LLM-Embedding is a unified model for embedding and retrieving event sequences based on natural language descriptions.
Our model encodes both event types and times, generating a sequence-level representation through pooling.
TPP-LLM-Embedding enables efficient retrieval and demonstrates superior performance compared to baseline models across diverse datasets.
- Abstract: Retrieving temporal event sequences from textual descriptions is essential for applications such as analyzing e-commerce behavior, monitoring social media activities, and tracking criminal incidents. In this paper, we introduce TPP-LLM-Embedding, a unified model for efficiently embedding and retrieving event sequences based on natural language descriptions. Built on the TPP-LLM framework, which integrates large language models with temporal point processes, our model encodes both event types and times, generating a sequence-level representation through pooling. Textual descriptions are embedded using the same architecture, ensuring a shared embedding space for both sequences and descriptions. We optimize a contrastive loss based on similarity between these embeddings, bringing matching pairs closer and separating non-matching ones. TPP-LLM-Embedding enables efficient retrieval and demonstrates superior performance compared to baseline models across diverse datasets.
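The abstract outlines the core recipe: encode event types and times, pool them into a sequence-level vector, embed descriptions into the same space, and train with a contrastive objective on embedding similarity. The sketch below illustrates that recipe in PyTorch under simplifying assumptions: a small learned encoder stands in for the TPP-LLM backbone, and the names (`SequenceTextEmbedder`, `contrastive_loss`) are hypothetical, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceTextEmbedder(nn.Module):
    """Toy stand-in for the paper's encoder: the real model runs event
    tokens and description tokens through a TPP-LLM backbone."""

    def __init__(self, num_event_types: int, dim: int = 128):
        super().__init__()
        self.type_emb = nn.Embedding(num_event_types, dim)  # event-type embedding
        self.time_proj = nn.Linear(1, dim)                  # encodes event times
        self.head = nn.Linear(dim, dim)                     # shared projection head

    def embed_sequence(self, types: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        # types: (batch, seq_len) int64; times: (batch, seq_len) float32
        h = self.type_emb(types) + self.time_proj(times.unsqueeze(-1))
        pooled = h.mean(dim=1)  # sequence-level representation via mean pooling
        return F.normalize(self.head(pooled), dim=-1)

    def embed_text(self, text_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (batch, dim) pooled description features from the same backbone,
        # so sequences and descriptions land in one shared embedding space
        return F.normalize(self.head(text_feats), dim=-1)

def contrastive_loss(seq_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style objective: matching (sequence, description)
    pairs lie on the diagonal of the similarity matrix and are pulled
    together; off-diagonal pairs are pushed apart."""
    logits = seq_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

Because sequences and descriptions share one embedding space, retrieval reduces to a nearest-neighbor search over cosine similarities, e.g. ranking candidate sequences by `seq_embs @ query_emb.t()`.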
Related papers
- Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding [57.62275091656578]
We refer to complex events composed of many news articles over an extended period as Temporal Complex Events (TCEs).
This paper proposes a novel approach using Large Language Models (LLMs) to systematically extract and analyze the event chain within TCE.
arXiv Detail & Related papers (2024-06-04T16:42:17Z)
- Pretext Training Algorithms for Event Sequence Data [29.70078362944441]
This paper proposes a self-supervised pretext training framework tailored to event sequence data.
Our pretext tasks unlock foundational representations that generalize across different downstream tasks.
arXiv Detail & Related papers (2024-02-16T01:25:21Z)
- Background Summarization of Event Timelines [13.264991569806572]
We introduce the task of background news summarization, which complements each timeline update with a background summary of relevant preceding events.
We construct a dataset by merging existing timeline datasets and asking human annotators to write a background summary for each timestep of each news event.
We establish strong baseline performance using state-of-the-art summarization systems and propose a query-focused variant to generate background summaries.
arXiv Detail & Related papers (2023-10-24T21:30:15Z)
- Prompt-augmented Temporal Point Process for Streaming Event Sequence [18.873915278172095]
We present a novel framework for continuous monitoring of a neural Temporal Point Process (TPP) model.
PromptTPP consistently achieves state-of-the-art performance across three real user behavior datasets.
arXiv Detail & Related papers (2023-10-08T03:41:16Z)
- Follow the Timeline! Generating Abstractive and Extractive Timeline Summary in Chronological Order [78.46986998674181]
We propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order.
We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset.
UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
arXiv Detail & Related papers (2023-01-02T20:29:40Z)
- Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach in which large language models are utilized to generate source documents that allow predicting, given a high-level event definition, the specific events, arguments, and relations between them.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z)
- CLIP-Event: Connecting Text and Images with Event Structures [123.31452120399827]
We propose a contrastive learning framework that enforces vision-language pretraining models to comprehend event structures.
We take advantage of text information extraction technologies to obtain event structural knowledge.
Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model in argument extraction.
arXiv Detail & Related papers (2022-01-13T17:03:57Z)
- Integrating Deep Event-Level and Script-Level Information for Script Event Prediction [60.67635412135681]
We propose a Transformer-based model, called MCPredictor, which integrates deep event-level and script-level information for script event prediction.
The experimental results on the widely-used New York Times corpus demonstrate the effectiveness and superiority of the proposed model.
arXiv Detail & Related papers (2021-09-24T07:37:32Z)
- Conditional Generation of Temporally-ordered Event Sequences [29.44608199294757]
We present a conditional generation model capable of capturing event co-occurrence as well as the temporality of event sequences.
This single model can address both temporal ordering, sorting a given sequence of events into the order they occurred, and event infilling, predicting new events which fit into a temporally-ordered sequence of existing ones.
arXiv Detail & Related papers (2020-12-31T18:10:18Z)
- Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning [63.91369308085091]
We propose a novel and simple model for event sequence generation and explore temporal relationships of the event sequence in the video.
The proposed model omits inefficient two-stage proposal generation and directly generates event boundaries conditioned on bi-directional temporal dependency in one pass.
The overall system achieves state-of-the-art performance on the Dense-Captioning Events in Videos task, with a METEOR score of 9.894 on the challenge test set.
arXiv Detail & Related papers (2020-06-14T13:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.