Pretext Training Algorithms for Event Sequence Data
- URL: http://arxiv.org/abs/2402.10392v1
- Date: Fri, 16 Feb 2024 01:25:21 GMT
- Title: Pretext Training Algorithms for Event Sequence Data
- Authors: Yimu Wang, He Zhao, Ruizhi Deng, Frederick Tung, Greg Mori
- Abstract summary: This paper proposes a self-supervised pretext training framework tailored to event sequence data.
Our pretext tasks unlock foundational representations that are generalizable across different downstream tasks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Pretext training followed by task-specific fine-tuning has been a successful
approach in vision and language domains. This paper proposes a self-supervised
pretext training framework tailored to event sequence data. We introduce a
novel alignment verification task that is specialized to event sequences,
building on good practices in masked reconstruction and contrastive learning.
Our pretext tasks unlock foundational representations that are generalizable
across different downstream tasks, including next-event prediction for
temporal point process models, event sequence classification, and missing event
interpolation. Experiments on popular public benchmarks demonstrate the
potential of the proposed method across different tasks and data domains.
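As a concrete illustration of the alignment-verification pretext task described above, here is a minimal sketch: a small encoder reads (inter-event time, event type) pairs, positives keep the original alignment, and negatives pair each sequence's times with the event types of another sequence. All module names, dimensions, and the negative-sampling scheme are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an alignment-verification pretext task for event
# sequences; names and architecture are illustrative, not the paper's code.
import torch
import torch.nn as nn

class AlignmentVerifier(nn.Module):
    def __init__(self, num_types: int, d: int = 32):
        super().__init__()
        self.type_emb = nn.Embedding(num_types, d)
        self.time_proj = nn.Linear(1, d)
        self.encoder = nn.GRU(2 * d, d, batch_first=True)
        self.head = nn.Linear(d, 1)  # aligned vs. misaligned logit

    def forward(self, times, types):
        # times: (B, T, 1) inter-event times; types: (B, T) event-type ids
        x = torch.cat([self.time_proj(times), self.type_emb(types)], dim=-1)
        _, h = self.encoder(x)
        return self.head(h[-1]).squeeze(-1)

B, T, num_types = 8, 20, 10
times = torch.rand(B, T, 1)
types = torch.randint(0, num_types, (B, T))
model = AlignmentVerifier(num_types)

# Positives keep the original (time, type) alignment; negatives pair each
# sequence's times with the types of a randomly permuted other sequence.
neg_types = types[torch.randperm(B)]
logits = torch.cat([model(times, types), model(times, neg_types)])
labels = torch.cat([torch.ones(B), torch.zeros(B)])
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```

A masked-reconstruction or contrastive head could share the same encoder and be trained alongside this verifier, in the multi-task spirit the abstract suggests.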
Related papers
- Uniting contrastive and generative learning for event sequences models
This study investigates the integration of two self-supervised learning techniques: instance-wise contrastive learning and a generative approach based on restoring masked events in latent space.
Experiments conducted on several public datasets, focusing on sequence classification and next-event type prediction, show that the integrated method achieves superior performance compared to individual approaches.
arXiv Detail & Related papers (2024-08-19T13:47:17Z)
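To make the integration described in the entry above concrete, here is a hedged sketch that combines an instance-wise InfoNCE contrastive loss with a masked-event reconstruction loss in latent space. The weighting lam, the shapes, and all names are assumptions, not the paper's code.

```python
# Illustrative combination of instance-wise contrastive (InfoNCE) and
# masked-event reconstruction losses; not the paper's actual code.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau: float = 0.1):
    # z1, z2: (B, d) embeddings of two augmented views of each sequence.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau              # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))       # matching views are positives
    return F.cross_entropy(logits, labels)

def combined_loss(z1, z2, restored, target, lam: float = 1.0):
    # restored/target: (B, T, d) latent events at masked positions.
    return info_nce(z1, z2) + lam * F.mse_loss(restored, target)

B, T, d = 16, 12, 64
loss = combined_loss(torch.randn(B, d), torch.randn(B, d),
                     torch.randn(B, T, d), torch.randn(B, T, d))
```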
- Unified Pretraining for Recommendation via Task Hypergraphs
We propose a novel multitask pretraining framework named Unified Pretraining for Recommendation via Task Hypergraphs.
To handle the diverse requirements and nuances of various pretext tasks within a unified learning pattern, we design task hypergraphs that generalize pretext tasks to hyperedge prediction.
A novel transitional attention layer is devised to discriminatively learn the relevance between each pretext task and recommendation.
arXiv Detail & Related papers (2023-10-20T05:33:21Z)
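Hyperedge prediction, the generalized pretext task in the entry above, can be pictured as scoring whether a candidate set of nodes jointly forms a hyperedge. The sketch below uses simple mean pooling over node embeddings rather than the paper's transitional attention layer; all names and shapes are illustrative assumptions.

```python
# Generic hyperedge-prediction sketch: pool the embeddings of a candidate
# node set and classify whether the set forms a hyperedge. Illustrative only.
import torch
import torch.nn as nn

class HyperedgeScorer(nn.Module):
    def __init__(self, d: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, node_embs):
        # node_embs: (B, K, d), K nodes per candidate hyperedge.
        pooled = node_embs.mean(dim=1)       # permutation-invariant set pooling
        return self.mlp(pooled).squeeze(-1)  # one hyperedge logit per candidate

scorer = HyperedgeScorer()
logits = scorer(torch.randn(32, 5, 64))  # 32 candidate hyperedges of 5 nodes
```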
- Event-Guided Procedure Planning from Instructional Videos with Text Supervision
We focus on the task of procedure planning from instructional videos with text supervision.
A critical challenge of this task is the large semantic gap between observed visual states and unobserved intermediate actions.
We propose a novel event-guided paradigm, which first infers events from the observed states and then plans out actions based on both the states and predicted events.
arXiv Detail & Related papers (2023-08-17T09:43:28Z)
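The event-guided paradigm in the entry above (infer events from observed states, then plan actions conditioned on both states and predicted events) can be sketched as two chained heads. The dimensions, module names, and use of start/goal state vectors are assumptions for illustration only.

```python
# Hypothetical two-stage sketch: an event head infers events from observed
# start/goal states, and an action head plans conditioned on states + events.
import torch
import torch.nn as nn

class EventGuidedPlanner(nn.Module):
    def __init__(self, d_state: int, num_events: int, num_actions: int):
        super().__init__()
        self.event_head = nn.Linear(2 * d_state, num_events)
        self.action_head = nn.Linear(2 * d_state + num_events, num_actions)

    def forward(self, start_state, goal_state):
        states = torch.cat([start_state, goal_state], dim=-1)
        event_logits = self.event_head(states)      # stage 1: infer events
        event_probs = event_logits.softmax(dim=-1)
        action_logits = self.action_head(
            torch.cat([states, event_probs], dim=-1))
        return event_logits, action_logits          # stage 2: plan actions

planner = EventGuidedPlanner(d_state=256, num_events=20, num_actions=50)
ev, act = planner(torch.randn(4, 256), torch.randn(4, 256))
```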
- Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events.
In practice, next-event prediction models are trained on sequential data collected at one point in time, yet deployed on data whose distribution may have shifted.
We propose a framework with hierarchical branching structures for learning context-specific representations.
arXiv Detail & Related papers (2022-10-24T07:54:13Z)
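One loose way to picture the hierarchical branching structures mentioned above is a gating network that softly routes each sequence representation through several context-specific branches. This is an illustrative reading, not the paper's causal treatment; all names are hypothetical.

```python
# Loose illustration of branching context-specific representations: a gate
# softly routes each sequence embedding through one of several branches.
import torch
import torch.nn as nn

class BranchingEncoder(nn.Module):
    def __init__(self, d: int = 64, num_branches: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Linear(d, d) for _ in range(num_branches))
        self.gate = nn.Linear(d, num_branches)

    def forward(self, h):
        # h: (B, d) sequence embedding; weights: (B, num_branches)
        weights = self.gate(h).softmax(dim=-1)
        outs = torch.stack([b(h) for b in self.branches], dim=1)  # (B, K, d)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)  # context-weighted mix

enc = BranchingEncoder()
z = enc(torch.randn(8, 64))
```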
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- Learning Constraints and Descriptive Segmentation for Subevent Detection
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
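The constraint-to-regularization step in the entry above can be sketched as adding a penalty for violating a learned linear, rectifier-style constraint over model probabilities. The constraint form, the weight lam, and all names below are assumptions for illustration.

```python
# Sketch of constraint-as-regularization: a rectifier-style linear constraint
# w.p + b >= 0 over model probabilities is penalized only when violated.
import torch

def constrained_loss(logits, targets, w, b, lam: float = 0.5):
    task_loss = torch.nn.functional.cross_entropy(logits, targets)
    probs = logits.softmax(dim=-1)
    # ReLU is zero when the constraint holds, positive on violations.
    violation = torch.relu(-(probs @ w + b)).mean()
    return task_loss + lam * violation

B, C = 16, 4
loss = constrained_loss(torch.randn(B, C, requires_grad=True),
                        torch.randint(0, C, (B,)),
                        torch.randn(C), torch.tensor(0.1))
loss.backward()
```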
- Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction
Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event.
Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks.
We propose Text2Event, a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner.
arXiv Detail & Related papers (2021-06-17T04:00:18Z)
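Sequence-to-structure generation hinges on linearizing event records so that a seq2seq model can emit them as plain text. The bracketed format below is a plausible illustration of such a linearization, not necessarily Text2Event's exact scheme.

```python
# Illustrative linearization of an event record into a bracketed string that
# a seq2seq model can generate token by token; the exact format is an assumption.
def linearize(event: dict) -> str:
    args = " ".join(f"({role} {text})" for role, text in event["args"])
    return f"(({event['type']} {event['trigger']} {args}))"

event = {
    "type": "Transport",
    "trigger": "returned",
    "args": [("Artifact", "the man"), ("Destination", "Los Angeles")],
}
print(linearize(event))
# ((Transport returned (Artifact the man) (Destination Los Angeles)))
```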
- A Deep Adversarial Model for Suffix and Remaining Time Prediction of Event Sequences
Event suffix and remaining time prediction are sequence-to-sequence learning tasks.
Recent deep learning-based works for such predictions are prone to potentially large prediction errors.
We propose an encoder-decoder architecture with open-loop training to improve suffix and remaining time prediction for event sequences.
arXiv Detail & Related papers (2021-02-15T02:01:24Z)
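Here is a hedged sketch of an encoder-decoder suffix predictor that rolls out open-loop, feeding its own predictions back in during decoding. The remaining-time head is omitted for brevity, and the architecture and names are assumptions rather than the paper's adversarial model.

```python
# Hedged sketch of an encoder-decoder for event-suffix prediction that feeds
# its own predictions back in during decoding (an open-loop style rollout).
import torch
import torch.nn as nn

class SuffixPredictor(nn.Module):
    def __init__(self, num_types: int, d: int = 64):
        super().__init__()
        self.emb = nn.Embedding(num_types, d)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.decoder = nn.GRUCell(d, d)
        self.out = nn.Linear(d, num_types)

    def forward(self, prefix, suffix_len: int):
        # prefix: (B, T) observed event types; returns (B, suffix_len, num_types)
        _, h = self.encoder(self.emb(prefix))
        h, logits = h[-1], []
        tok = prefix[:, -1]                # seed with the last observed event
        for _ in range(suffix_len):
            h = self.decoder(self.emb(tok), h)
            step = self.out(h)
            logits.append(step)
            tok = step.argmax(dim=-1)      # feed back the model's own prediction
        return torch.stack(logits, dim=1)

model = SuffixPredictor(num_types=12)
logits = model(torch.randint(0, 12, (4, 10)), suffix_len=5)
```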
- Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning
We propose a novel and simple model for event sequence generation and explore temporal relationships of the event sequence in the video.
The proposed model omits inefficient two-stage proposal generation and directly generates event boundaries conditioned on bi-directional temporal dependency in one pass.
The overall system achieves state-of-the-art performance on the dense-captioning events in video task with 9.894 METEOR score on the challenge testing set.
arXiv Detail & Related papers (2020-06-14T13:21:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.