Integrating Deep Event-Level and Script-Level Information for Script Event Prediction
- URL: http://arxiv.org/abs/2110.15706v1
- Date: Fri, 24 Sep 2021 07:37:32 GMT
- Title: Integrating Deep Event-Level and Script-Level Information for Script Event Prediction
- Authors: Long Bai, Saiping Guan, Jiafeng Guo, Zixuan Li, Xiaolong Jin, Xueqi Cheng
- Abstract summary: We propose a Transformer-based model, called MCPredictor, which integrates deep event-level and script-level information for script event prediction.
The experimental results on the widely-used New York Times corpus demonstrate the effectiveness and superiority of the proposed model.
- Score: 60.67635412135681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scripts are structured sequences of events, together with their participants, extracted from texts. Script event prediction aims to predict the
subsequent event given the historical events in the script. Two kinds of
information facilitate this task, namely, the event-level information and the
script-level information. At the event level, existing studies view an event as
a verb with its participants, while neglecting other useful properties, such as
the state of the participants. At the script level, most existing studies only
consider a single event sequence corresponding to one common protagonist. In
this paper, we propose a Transformer-based model, called MCPredictor, which
integrates deep event-level and script-level information for script event
prediction. At the event level, MCPredictor utilizes the rich information in
the text to obtain more comprehensive event semantic representations. At the
script level, it considers multiple event sequences corresponding to different
participants of the subsequent event. The experimental results on the
widely-used New York Times corpus demonstrate the effectiveness and superiority
of the proposed model.
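To make the script-level idea above concrete, the sketch below shows one minimal way such multi-chain scoring could look in PyTorch. It is not the authors' MCPredictor implementation: the class name MultiChainScorer, the assumption that events arrive as pre-embedded vectors, and the sum-over-chains scoring rule are illustrative choices; only the overall idea (encode each participant's event chain with a Transformer and score a candidate subsequent event against every chain) follows the abstract.

```python
# Illustrative sketch only (NOT the authors' code): each candidate subsequent
# event is appended to every participant's event chain, the joint sequence is
# encoded with a Transformer, and per-chain scores are summed.
import torch
import torch.nn as nn


class MultiChainScorer(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4, layers: int = 2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.score = nn.Linear(dim, 1)

    def forward(self, chains: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        """
        chains:     (num_chains, chain_len, dim) historical events, one chain per participant
        candidates: (num_candidates, dim)        embeddings of candidate subsequent events
        returns:    (num_candidates,)            one score per candidate, summed over chains
        """
        num_chains, chain_len, dim = chains.shape
        num_cand = candidates.shape[0]
        # Pair every candidate with every chain and append it as the last "event".
        chains_rep = chains.unsqueeze(0).expand(num_cand, -1, -1, -1)          # (C, N, L, D)
        cand_rep = candidates.view(num_cand, 1, 1, dim).expand(-1, num_chains, -1, -1)
        seqs = torch.cat([chains_rep, cand_rep], dim=2)                         # (C, N, L+1, D)
        seqs = seqs.reshape(num_cand * num_chains, chain_len + 1, dim)
        encoded = self.encoder(seqs)                                            # (C*N, L+1, D)
        # Score from the representation at the candidate position; sum over chains.
        cand_state = encoded[:, -1, :]
        scores = self.score(cand_state).view(num_cand, num_chains)
        return scores.sum(dim=-1)


if __name__ == "__main__":
    model = MultiChainScorer()
    chains = torch.randn(3, 8, 128)       # 3 participant chains of 8 historical events
    candidates = torch.randn(5, 128)      # 5 candidate subsequent events
    print(model(chains, candidates).shape)  # torch.Size([5])
```

In this toy setup the candidate with the highest summed score would be predicted as the subsequent event; obtaining the event embeddings themselves (e.g., from the surrounding text) is left out of the sketch.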
Related papers
- Grounding Partially-Defined Events in Multimodal Data [61.0063273919745]
We introduce a multimodal formulation for partially-defined events and cast the extraction of these events as a three-stage span retrieval task.
We propose a benchmark for this task, MultiVENT-G, that consists of 14.5 hours of densely annotated current event videos and 1,168 text documents, containing 22.8K labeled event-centric entities.
Results illustrate the challenges that abstract event understanding poses and demonstrate promise in event-centric video-language systems.
arXiv Detail & Related papers (2024-10-07T17:59:48Z) - What Would Happen Next? Predicting Consequences from An Event Causality Graph [23.92119748794742]
The existing script event prediction task forecasts the subsequent event based on an event script chain.
This paper introduces a Causality Graph Event Prediction task that forecasts consequential events based on an Event Causality Graph (ECG).
arXiv Detail & Related papers (2024-09-26T02:34:08Z) - PromptCL: Improving Event Representation via Prompt Template and Contrastive Learning [3.481567499804089]
We present PromptCL, a novel framework for event representation learning.
PromptCL elicits the capabilities of PLMs to comprehensively capture the semantics of short event texts.
Our experimental results demonstrate that PromptCL outperforms state-of-the-art baselines on event-related tasks.
arXiv Detail & Related papers (2024-04-27T12:22:43Z) - Rich Event Modeling for Script Event Prediction [60.67635412135682]
We propose the Rich Event Prediction (REP) framework for script event prediction.
REP contains an event extractor to extract rich event information from texts.
The core component of the predictor is a transformer-based event encoder to flexibly deal with an arbitrary number of arguments.
arXiv Detail & Related papers (2022-12-16T05:17:59Z) - CLIP-Event: Connecting Text and Images with Event Structures [123.31452120399827]
We propose a contrastive learning framework to enforce vision-language pretraining models to comprehend events and their associated argument roles.
We take advantage of text information extraction technologies to obtain event structural knowledge.
Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model in argument extraction.
arXiv Detail & Related papers (2022-01-13T17:03:57Z) - proScript: Partially Ordered Scripts Generation via Pre-trained Language Models [49.03193243699244]
We demonstrate for the first time that pre-trained neural language models (LMs) can be finetuned to generate high-quality scripts.
We collected a large (6.4K) crowdsourced dataset of partially ordered scripts (named proScript).
Our experiments show that our models perform well (e.g., F1=75.7 in task (i)), illustrating a new approach to overcoming previous barriers to script collection.
arXiv Detail & Related papers (2021-04-16T17:35:10Z) - Machine-Assisted Script Curation [7.063255210805794]
We describe Machine-Aided Script Curator (MASC), a system for human-machine collaborative script authoring.
MASC automates portions of the script creation process with suggestions for event types, links to Wikidata, and sub-events that may have been forgotten.
arXiv Detail & Related papers (2021-01-14T00:19:21Z) - Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning [63.91369308085091]
We propose a novel and simple model for event sequence generation and explore temporal relationships of the event sequence in the video.
The proposed model omits inefficient two-stage proposal generation and directly generates event boundaries conditioned on bi-directional temporal dependency in one pass.
The overall system achieves state-of-the-art performance on the dense-captioning events in videos task, with a METEOR score of 9.894 on the challenge testing set.
arXiv Detail & Related papers (2020-06-14T13:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.