Modeling Complex Event Scenarios via Simple Entity-focused Questions
- URL: http://arxiv.org/abs/2302.07139v1
- Date: Tue, 14 Feb 2023 15:48:56 GMT
- Title: Modeling Complex Event Scenarios via Simple Entity-focused Questions
- Authors: Mahnaz Koupaee, Greg Durrett, Nathanael Chambers, Niranjan
Balasubramanian
- Abstract summary: We propose a question-guided generation framework that models events in complex scenarios as answers to questions about participants.
At any step in the generation process, the framework uses the previously generated events as context, but generates the next event as an answer to one of three questions.
Our empirical evaluation shows that this question-guided generation provides better coverage of participants, diverse events within a domain, and comparable perplexities for modeling event sequences.
- Score: 58.16787028844743
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event scenarios are often complex and involve multiple event sequences
connected through different entity participants. Exploring such complex
scenarios requires an ability to branch through different sequences, something
that is difficult to achieve with standard event language modeling. To address
this, we propose a question-guided generation framework that models events in
complex scenarios as answers to questions about participants. At any step in
the generation process, the framework uses the previously generated events as
context, but generates the next event as an answer to one of three questions:
what else a participant did, what else happened to a participant, or what else
happened. The participants and the questions themselves can either be sampled or
provided as input by a user, allowing for controllable exploration. Our
empirical evaluation shows that this question-guided generation provides better
coverage of participants, diverse events within a domain, comparable
perplexities for modeling event sequences, and more effective control for
interactive schema generation.
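To make the generation loop concrete, here is a minimal Python sketch of how question-guided event generation might be wired up. The question templates, prompt format, and the `generate` callable are illustrative assumptions for exposition, not the authors' implementation; a real system would plug in a trained language model as the generator.
```python
import random
from typing import Callable, List

# Hypothetical templates mirroring the three question types described in the abstract.
QUESTION_TEMPLATES = {
    "agent":   "What else did {participant} do?",
    "patient": "What else happened to {participant}?",
    "general": "What else happened?",
}

def question_guided_generation(
    scenario: str,
    participants: List[str],
    generate: Callable[[str], str],   # any text generator: prompt -> next event
    num_steps: int = 5,
) -> List[str]:
    """Sketch of the question-guided generation loop: at each step the
    previously generated events form the context, and the next event is
    produced as the answer to one of three entity-focused questions.
    Participants and questions are sampled here, but could instead be
    supplied by a user for controllable exploration."""
    events: List[str] = []
    for _ in range(num_steps):
        q_type = random.choice(list(QUESTION_TEMPLATES))
        participant = random.choice(participants) if q_type != "general" else ""
        question = QUESTION_TEMPLATES[q_type].format(participant=participant)
        prompt = (
            f"Scenario: {scenario}\n"
            f"Events so far: {' '.join(events) or '(none)'}\n"
            f"Question: {question}\n"
            f"Answer with a single new event:"
        )
        events.append(generate(prompt))
    return events

# Toy usage with a stub generator standing in for a sequence-to-sequence model.
if __name__ == "__main__":
    stub = lambda prompt: "the customer paid the bill."
    print(question_guided_generation("eating at a restaurant",
                                     ["the customer", "the waiter"], stub, 3))
```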
Related papers
- Grounding Partially-Defined Events in Multimodal Data [61.0063273919745]
We introduce a multimodal formulation for partially-defined events and cast the extraction of these events as a three-stage span retrieval task.
We propose a benchmark for this task, MultiVENT-G, that consists of 14.5 hours of densely annotated current event videos and 1,168 text documents, containing 22.8K labeled event-centric entities.
Results illustrate the challenges that abstract event understanding poses and demonstrate the promise of event-centric video-language systems.
arXiv Detail & Related papers (2024-10-07T17:59:48Z) - Prompt-based Graph Model for Joint Liberal Event Extraction and Event Schema Induction [1.3154296174423619]
Events are essential components of speech and text, describing changes in the state of entities.
The event extraction task aims to identify and classify events and find their participants according to event schemas.
The researchers propose Liberal Event Extraction (LEE), which aims to extract events and discover event schemas simultaneously.
arXiv Detail & Related papers (2024-03-19T07:56:42Z) - PESE: Event Structure Extraction using Pointer Network based
Encoder-Decoder Architecture [0.0]
Event extraction (EE) aims to find the events and event-related argument information from the text and represent them in a structured format.
In this paper, we represent each event record in a unique format that contains trigger phrase, trigger type, argument phrase, and corresponding role information.
Our proposed pointer network-based encoder-decoder model generates an event in each step by exploiting the interactions among event participants.
arXiv Detail & Related papers (2022-11-22T10:36:56Z) - Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach that uses large language models to generate source documents from which, given a high-level event definition, the specific events, arguments, and relations between them can be predicted.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z) - Unifying Event Detection and Captioning as Sequence Generation via
Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z) - Integrating Deep Event-Level and Script-Level Information for Script
Event Prediction [60.67635412135681]
We propose a Transformer-based model, called MCPredictor, which integrates deep event-level and script-level information for script event prediction.
The experimental results on the widely-used New York Times corpus demonstrate the effectiveness and superiority of the proposed model.
arXiv Detail & Related papers (2021-09-24T07:37:32Z) - Toward Diverse Precondition Generation [15.021241299690226]
Precondition generation can be framed as a sequence-to-sequence problem.
In most real-world scenarios, an event can have several preconditions, requiring diverse generation.
We propose DiP, a Diverse Precondition generation system that can generate unique and diverse preconditions.
arXiv Detail & Related papers (2021-06-14T00:33:29Z) - Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring
Sequential Events Detection for Dense Video Captioning [63.91369308085091]
We propose a novel and simple model for event sequence generation and explore temporal relationships of the event sequence in the video.
The proposed model omits inefficient two-stage proposal generation and directly generates event boundaries conditioned on bi-directional temporal dependency in one pass.
The overall system achieves state-of-the-art performance on the dense-captioning events in video task, with a METEOR score of 9.894 on the challenge test set.
arXiv Detail & Related papers (2020-06-14T13:21:37Z)