Integrating Deep Event-Level and Script-Level Information for Script
Event Prediction
- URL: http://arxiv.org/abs/2110.15706v1
- Date: Fri, 24 Sep 2021 07:37:32 GMT
- Title: Integrating Deep Event-Level and Script-Level Information for Script
Event Prediction
- Authors: Long Bai, Saiping Guan, Jiafeng Guo, Zixuan Li, Xiaolong Jin, Xueqi
Cheng
- Abstract summary: We propose a Transformer-based model, called MCPredictor, which integrates deep event-level and script-level information for script event prediction.
The experimental results on the widely-used New York Times corpus demonstrate the effectiveness and superiority of the proposed model.
- Score: 60.67635412135681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scripts are structured sequences of events, together with their
participants, that are extracted from texts. Script event prediction aims to predict the
subsequent event given the historical events in the script. Two kinds of
information facilitate this task, namely, the event-level information and the
script-level information. At the event level, existing studies view an event as
a verb with its participants, while neglecting other useful properties, such as
the state of the participants. At the script level, most existing studies only
consider a single event sequence corresponding to one common protagonist. In
this paper, we propose a Transformer-based model, called MCPredictor, which
integrates deep event-level and script-level information for script event
prediction. At the event level, MCPredictor utilizes the rich information in
the text to obtain more comprehensive event semantic representations. At the
script level, it considers multiple event sequences corresponding to different
participants of the subsequent event. The experimental results on the
widely-used New York Times corpus demonstrate the effectiveness and superiority
of the proposed model.
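To make the task concrete, the setup described above can be sketched as a multiple-choice prediction: given a context of historical events (verb plus participants), score each candidate subsequent event and select the best one. The bag-of-words embedding and overlap scorer below are illustrative placeholders, not MCPredictor's Transformer-based encoder; the event tuples and function names are invented for this sketch.

```python
# Minimal sketch of multiple-choice script event prediction:
# score candidate next events against the historical context and
# pick the highest-scoring one. The embedding and scorer here are
# toy stand-ins for a learned model such as MCPredictor.
from collections import Counter

def embed(event):
    """Toy event embedding: a count vector over the event's
    verb and arguments."""
    return Counter(event)

def score(context, candidate):
    """Toy score: word overlap between the candidate event and the
    concatenated context events (a stand-in for a learned scorer)."""
    ctx = Counter()
    for ev in context:
        ctx.update(embed(ev))
    cand = embed(candidate)
    return sum(ctx[w] * cand[w] for w in cand)

def predict(context, candidates):
    """Return the highest-scoring candidate, as in the
    multiple-choice narrative cloze evaluation."""
    return max(candidates, key=lambda c: score(context, c))

# A protagonist ("customer") appears across the event sequence.
context = [("enter", "customer", "restaurant"),
           ("order", "customer", "food"),
           ("serve", "waiter", "food")]
candidates = [("pay", "customer", "bill"),
              ("launch", "rocket", "satellite")]
print(predict(context, candidates))  # -> ('pay', 'customer', 'bill')
```

The overlap scorer rewards candidates sharing participants with the context, a crude analogue of the script-level signal the paper exploits; MCPredictor instead encodes multiple participant-specific event sequences with a Transformer.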
Related papers
- PromptCL: Improving Event Representation via Prompt Template and Contrastive Learning [3.481567499804089]
We present PromptCL, a novel framework for event representation learning.
PromptCL elicits the capabilities of PLMs to comprehensively capture the semantics of short event texts.
Our experimental results demonstrate that PromptCL outperforms state-of-the-art baselines on event related tasks.
arXiv Detail & Related papers (2024-04-27T12:22:43Z)
- Rich Event Modeling for Script Event Prediction [60.67635412135682]
We propose the Rich Event Prediction (REP) framework for script event prediction.
REP contains an event extractor to extract such information from texts.
The core component of the predictor is a transformer-based event encoder to flexibly deal with an arbitrary number of arguments.
arXiv Detail & Related papers (2022-12-16T05:17:59Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- CLIP-Event: Connecting Text and Images with Event Structures [123.31452120399827]
We propose a contrastive learning framework to enforce vision-language pretraining models to comprehend events and associated argument roles.
We take advantage of text information extraction technologies to obtain event structural knowledge.
Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model in argument extraction.
arXiv Detail & Related papers (2022-01-13T17:03:57Z)
- Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction [35.39643772926177]
Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event.
Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks.
We propose Text2Event, a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner.
arXiv Detail & Related papers (2021-06-17T04:00:18Z)
- proScript: Partially Ordered Scripts Generation via Pre-trained Language Models [49.03193243699244]
We demonstrate for the first time that pre-trained neural language models (LMs) can be finetuned to generate high-quality scripts.
We collect a large (6.4k), crowdsourced dataset of partially ordered scripts (named proScript).
Our experiments show that our models perform well (e.g., F1=75.7 in task (i)), illustrating a new approach to overcoming previous barriers to script collection.
arXiv Detail & Related papers (2021-04-16T17:35:10Z)
- Machine-Assisted Script Curation [7.063255210805794]
We describe Machine-Aided Script Curator (MASC), a system for human-machine collaborative script authoring.
MASC automates portions of the script creation process with suggestions for event types, links to Wikidata, and sub-events that may have been forgotten.
arXiv Detail & Related papers (2021-01-14T00:19:21Z)
- Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning [63.91369308085091]
We propose a novel and simple model for event sequence generation and explore temporal relationships of the event sequence in the video.
The proposed model omits inefficient two-stage proposal generation and directly generates event boundaries conditioned on bi-directional temporal dependency in one pass.
The overall system achieves state-of-the-art performance on the dense-captioning events in video task with 9.894 METEOR score on the challenge testing set.
arXiv Detail & Related papers (2020-06-14T13:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.