Toward Diverse Precondition Generation
- URL: http://arxiv.org/abs/2106.07117v1
- Date: Mon, 14 Jun 2021 00:33:29 GMT
- Title: Toward Diverse Precondition Generation
- Authors: Heeyoung Kwon, Nathanael Chambers, and Niranjan Balasubramanian
- Abstract summary: Precondition generation can be framed as a sequence-to-sequence problem.
In most real-world scenarios, an event can have several preconditions, requiring diverse generation.
We propose DiP, a Diverse Precondition generation system that can generate unique and diverse preconditions.
- Score: 15.021241299690226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language understanding must identify the logical connections between events
in a discourse, but core events are often unstated due to their commonsense
nature. This paper fills in these missing events by generating precondition
events. Precondition generation can be framed as a sequence-to-sequence
problem: given a target event, generate a possible precondition. However, in
most real-world scenarios, an event can have several preconditions, requiring
diverse generation -- a challenge for standard seq2seq approaches. We propose
DiP, a Diverse Precondition generation system that can generate unique and
diverse preconditions. DiP uses a generative process with three components --
an event sampler, a candidate generator, and a post-processor. The event
sampler provides control codes (precondition triggers) which the candidate
generator uses to focus its generation. Unlike other conditional generation
systems, DiP automatically generates control codes without training on diverse
examples. Analysis against baselines reveals that DiP improves the diversity of
preconditions significantly while also generating more preconditions.
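DiP's three-stage generative process lends itself to a simple pipeline view. The sketch below is a minimal, hypothetical Python rendering of that flow, assuming a trained seq2seq model behind `generate`; the trigger vocabulary, prompt format, and deduplication heuristic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of DiP's event sampler -> candidate generator ->
# post-processor flow. The trigger set, prompt format, and dedup rule
# are illustrative assumptions, not the paper's implementation.
from typing import Callable, List

def event_sampler(target_event: str, k: int = 3) -> List[str]:
    """Stage 1: propose precondition triggers (control codes)."""
    trigger_vocab = ["buy", "enter", "learn", "apply", "meet"]  # assumed set
    return trigger_vocab[:k]

def candidate_generator(target_event: str, trigger: str,
                        generate: Callable[[str], str]) -> str:
    """Stage 2: condition the generator on a trigger to focus its output."""
    return generate(f"<trigger={trigger}> {target_event}")

def post_processor(candidates: List[str]) -> List[str]:
    """Stage 3: keep only unique candidates (case-insensitive)."""
    unique: List[str] = []
    for cand in candidates:
        if all(cand.lower() != kept.lower() for kept in unique):
            unique.append(cand)
    return unique

def dip_pipeline(target_event: str, generate: Callable[[str], str]) -> List[str]:
    triggers = event_sampler(target_event)
    candidates = [candidate_generator(target_event, t, generate) for t in triggers]
    return post_processor(candidates)

# Stand-in for a trained seq2seq generator such as a fine-tuned GPT-2/BART.
fake_model = lambda prompt: f"precondition from: {prompt}"
print(dip_pipeline("She boarded the flight to Boston.", fake_model))
```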
Related papers
- Distilling Event Sequence Knowledge From Large Language Models [17.105913216452738]
Event sequence models have been found to be highly effective in the analysis and prediction of events.
We use Large Language Models to generate event sequences that can effectively be used for probabilistic event model construction.
We show that our approach can generate high-quality event sequences, filling a knowledge gap in the input knowledge graph (KG).
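As a rough illustration of the distillation step, the hedged sketch below prompts an LLM for follow-on events and tallies bigram transitions for a simple probabilistic model; the prompt wording, the parsing, and the `llm_complete` stub are assumptions, not the paper's pipeline.

```python
# Hedged sketch: elicit event sequences from an LLM, then build simple
# transition counts for a probabilistic event model. The prompt text,
# parsing, and llm_complete stub are illustrative assumptions.
from collections import Counter
from typing import Callable, List

def distill_event_sequences(seed_event: str,
                            llm_complete: Callable[[str], str],
                            n_samples: int = 5) -> List[List[str]]:
    prompt = f"List, one per line, events that typically follow: '{seed_event}'"
    sequences = []
    for _ in range(n_samples):
        lines = llm_complete(prompt).splitlines()
        events = [ln.strip("- ").strip() for ln in lines if ln.strip()]
        if events:
            sequences.append(events)
    return sequences

def transition_counts(sequences: List[List[str]]) -> Counter:
    counts: Counter = Counter()
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[(prev, nxt)] += 1  # bigram event transitions
    return counts
```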
arXiv Detail & Related papers (2024-01-14T09:34:42Z)
- Modeling Complex Event Scenarios via Simple Entity-focused Questions [58.16787028844743]
We propose a question-guided generation framework that models events in complex scenarios as answers to questions about participants.
At any step in the generation process, the framework uses the previously generated events as context, but generates the next event as an answer to one of three questions.
Our empirical evaluation shows that this question-guided generation provides better coverage of participants, diverse events within a domain, and comparable perplexities for modeling event sequences.
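A hedged sketch of the question-guided loop follows; the three question templates and the `answer` stub are placeholders standing in for the paper's actual question set and QA model.

```python
# Hedged sketch of question-guided event generation: each new event is
# produced as the answer to a participant-focused question, with prior
# events as context. The templates are placeholders, not the paper's
# exact question set.
import random
from typing import Callable, List

QUESTION_TEMPLATES = [
    "What does {p} do next?",
    "What happens to {p} next?",
    "What does {p} need before this?",
]

def generate_scenario(participant: str, seed_event: str,
                      answer: Callable[[str, List[str]], str],
                      n_steps: int = 4) -> List[str]:
    events = [seed_event]
    for _ in range(n_steps):
        question = random.choice(QUESTION_TEMPLATES).format(p=participant)
        events.append(answer(question, events))  # prior events are context
    return events
```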
arXiv Detail & Related papers (2023-02-14T15:48:56Z)
- Retrieval-Augmented Generative Question Answering for Event Argument Extraction [66.24622127143044]
We propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction.
It retrieves the most similar QA pair and augments it as prompt to the current example's context, then decodes the arguments as answers.
Our approach substantially outperforms prior methods across various settings.
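The retrieve-then-prompt step can be pictured as below; token-overlap retrieval and the prompt layout are simplifying assumptions standing in for the paper's retriever and decoder.

```python
# Hedged sketch of R-GQA's retrieve-then-prompt idea. Token-overlap
# retrieval and the prompt layout are simplifying assumptions; a real
# system would use learned dense similarity.
from typing import Callable, List, Tuple

def retrieve_most_similar(context: str,
                          qa_store: List[Tuple[str, str, str]]
                          ) -> Tuple[str, str, str]:
    """qa_store holds (context, question, answer) triples."""
    def overlap(a: str, b: str) -> int:
        return len(set(a.lower().split()) & set(b.lower().split()))
    return max(qa_store, key=lambda triple: overlap(context, triple[0]))

def r_gqa(context: str, question: str,
          qa_store: List[Tuple[str, str, str]],
          generate: Callable[[str], str]) -> str:
    demo_ctx, demo_q, demo_a = retrieve_most_similar(context, qa_store)
    prompt = (f"Context: {demo_ctx}\nQuestion: {demo_q}\nAnswer: {demo_a}\n\n"
              f"Context: {context}\nQuestion: {question}\nAnswer:")
    return generate(prompt)  # the decoded answer is the extracted argument
```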
arXiv Detail & Related papers (2022-11-14T02:00:32Z)
- Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment [72.50906475214457]
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events.
In practice, next-event prediction models are trained on sequential data collected at a single point in time, so the event distribution at deployment can drift away from the training distribution.
We propose a framework with hierarchical branching structures for learning context-specific representations.
arXiv Detail & Related papers (2022-10-24T07:54:13Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms state-of-the-art methods and can be further improved when pre-trained on additional large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation [43.71069101841354]
Sequence-to-Sequence (S2S) neural text generation models have exhibited compelling performance on various natural language generation tasks.
However, the black-box nature of these models limits their application in tasks where specific rules need to be executed.
We propose a novel module, the Neural Rule-Execution Tracking Machine, that can be plugged into various transformer-based generators to enforce multiple rules simultaneously.
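The paper's tracking machine lives inside the transformer; as a loose, generic stand-in, the sketch below enforces "must include word w" rules by boosting logits for rules still unmet during decoding, a common constrained-decoding idea rather than the paper's mechanism.

```python
# Generic stand-in (not the paper's mechanism): track which lexical
# rules are already satisfied and boost logits for tokens required by
# rules that are still unmet.
from typing import Dict, List, Set

def apply_rules(logits: Dict[str, float],
                generated: List[str],
                must_include: Set[str],
                boost: float = 2.0) -> Dict[str, float]:
    pending = must_include - set(generated)  # rules not yet executed
    return {tok: score + boost if tok in pending else score
            for tok, score in logits.items()}

# At each decoding step, pass the current token logits through
# apply_rules() before sampling so required tokens become more likely.
```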
arXiv Detail & Related papers (2021-07-27T20:41:05Z)
- Modeling Preconditions in Text with a Crowd-sourced Dataset [17.828175478279654]
This paper introduces PeKo, a crowd-sourced annotation of preconditions between event pairs in newswire.
We also introduce two challenge tasks aimed at modeling preconditions.
Evaluation on both tasks shows that modeling preconditions is challenging even for today's large language models.
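One natural framing of precondition identification is binary classification over an event pair in context; the sketch below shows a hypothetical input encoding, assuming a scoring model behind `classify` (the marker tokens are illustrative, not PeKo's official format).

```python
# Hedged sketch of precondition identification as binary classification
# over an event pair in context. The marker tokens and classify() stub
# are illustrative assumptions, not PeKo's official format.
from typing import Callable

def is_precondition(sentence: str, precond_event: str, target_event: str,
                    classify: Callable[[str], float],
                    threshold: float = 0.5) -> bool:
    text = f"{sentence} [E1] {precond_event} [E2] {target_event}"
    return classify(text) >= threshold
```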
arXiv Detail & Related papers (2020-10-06T01:52:34Z)
- Conditional Hybrid GAN for Sequence Generation [56.67961004064029]
We propose a novel conditional hybrid GAN (C-Hybrid-GAN) for context-conditioned discrete-valued sequence generation.
We exploit the Gumbel-Softmax technique to approximate the distribution of discrete-valued sequences.
We demonstrate that the proposed C-Hybrid-GAN outperforms the existing methods in context-conditioned discrete-valued sequence generation.
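The Gumbel-Softmax relaxation at the heart of this approach can be written in a few lines; below is a minimal, dependency-free sketch of the sampling step (the GAN around it is omitted).

```python
# Minimal Gumbel-Softmax sketch: a differentiable relaxation of sampling
# from a categorical distribution, which lets gradients flow through
# discrete token choices. The surrounding GAN is omitted.
import math
import random
from typing import List

def gumbel_softmax(logits: List[float], tau: float = 1.0) -> List[float]:
    """Relaxed one-hot sample; as tau -> 0 it approaches hard argmax."""
    gumbels = [-math.log(-math.log(max(random.random(), 1e-12)))
               for _ in logits]
    scores = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(gumbel_softmax([2.0, 0.5, 0.1], tau=0.3))  # peaked near one entry
```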
arXiv Detail & Related papers (2020-09-18T03:52:55Z)
- Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning [63.91369308085091]
We propose a novel and simple model for event sequence generation and explore temporal relationships of the event sequence in the video.
The proposed model omits inefficient two-stage proposal generation and directly generates event boundaries conditioned on bi-directional temporal dependency in one pass.
The overall system achieves state-of-the-art performance on the Dense-Captioning Events in Videos task, with a METEOR score of 9.894 on the challenge test set.
arXiv Detail & Related papers (2020-06-14T13:21:37Z)
- ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation [44.21363470798758]
ERNIE-GEN is an enhanced multi-flow sequence to sequence pre-training and fine-tuning framework.
It bridges the discrepancy between training and inference with an infilling generation mechanism and a noise-aware generation method.
It trains the model to predict semantically-complete spans consecutively rather than predicting word by word.
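A hedged sketch of span-level infilling, the contrast with word-by-word prediction, is given below; the span length and the `[SPAN]` mask token are illustrative, not ERNIE-GEN's exact multi-flow setup.

```python
# Hedged sketch of span infilling: the model is trained to reconstruct a
# whole masked span rather than one word at a time. Span length and the
# [SPAN] token are illustrative, not ERNIE-GEN's exact procedure.
import random
from typing import List, Tuple

def make_infilling_example(tokens: List[str],
                           span_len: int = 3) -> Tuple[List[str], List[str]]:
    start = random.randrange(0, max(1, len(tokens) - span_len))
    target = tokens[start:start + span_len]           # span to predict
    source = tokens[:start] + ["[SPAN]"] + tokens[start + span_len:]
    return source, target

src, tgt = make_infilling_example("the cat sat on the warm mat".split())
print(src, "->", tgt)
```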
arXiv Detail & Related papers (2020-01-26T02:54:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.