Dynamic Prefix-Tuning for Generative Template-based Event Extraction
- URL: http://arxiv.org/abs/2205.06166v1
- Date: Thu, 12 May 2022 15:38:34 GMT
- Title: Dynamic Prefix-Tuning for Generative Template-based Event Extraction
- Authors: Xiao Liu, Heyan Huang, Ge Shi, Bo Wang
- Abstract summary: We propose a generative template-based event extraction method with dynamic prefix (GTEE-DynPref).
Experimental results show that our model achieves competitive results with the state-of-the-art classification-based model OneIE on ACE 2005.
Our model is proven to be portable to new types of events effectively.
- Score: 31.581360683375337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider event extraction in a generative manner with template-based
conditional generation. Although there is a rising trend of casting the task of
event extraction as a sequence generation problem with prompts, these
generation-based methods face two significant challenges: suboptimal prompts
and static event type information. In this paper, we propose
a generative template-based event extraction method with dynamic prefix
(GTEE-DynPref) by integrating context information with type-specific prefixes
to learn a context-specific prefix for each context. Experimental results show
that our model achieves competitive results with the state-of-the-art
classification-based model OneIE on ACE 2005 and achieves the best performances
on ERE. Additionally, our model is proven to be portable to new types of events
effectively.
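The abstract describes combining type-specific prefixes with context information to obtain one context-specific prefix. A minimal sketch of that idea in plain Python (the softmax mixture, function name, and toy vectors are illustrative assumptions; the actual model operates on learned transformer key/value prefixes, not flat lists):

```python
import math

def dynamic_prefix(context_vec, type_prefixes):
    """Blend type-specific prefixes into one context-specific prefix.

    context_vec:   list of floats, a hypothetical encoding of the input context
    type_prefixes: {event_type: prefix vector}, one learnable prefix per type
    """
    # Score each event type against the context with a dot product.
    scores = {t: sum(c * p for c, p in zip(context_vec, vec))
              for t, vec in type_prefixes.items()}
    # Softmax over event types (shifted by the max for numerical stability).
    m = max(scores.values())
    exp = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exp.values())
    weights = {t: e / z for t, e in exp.items()}
    # Convex mixture: contexts close to a type lean on that type's prefix.
    dim = len(context_vec)
    return [sum(weights[t] * type_prefixes[t][i] for t in type_prefixes)
            for i in range(dim)]

ctx = [1.0, 0.0]                                        # toy context encoding
prefixes = {"Attack": [2.0, 0.0], "Meet": [0.0, 2.0]}   # toy type prefixes
mix = dynamic_prefix(ctx, prefixes)
# ctx aligns with "Attack", so the mixture leans toward that type's prefix
```

The mixture stays inside the convex hull of the type prefixes, so every context-specific prefix remains grounded in the type-specific ones.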
Related papers
- Generative Context Distillation [48.91617280112579]
Generative Context Distillation (GCD) is a lightweight prompt internalization method that employs a joint training approach.
We demonstrate that our approach effectively internalizes complex prompts across various agent-based application scenarios.
arXiv Detail & Related papers (2024-11-24T17:32:20Z)
- DEGAP: Dual Event-Guided Adaptive Prefixes for Templated-Based Event Argument Extraction with Slot Querying [32.115904077731386]
Recent advancements in event argument extraction (EAE) involve incorporating useful auxiliary information into models during training and inference.
These methods face two challenges: (1) the retrieval results may be irrelevant and (2) templates are developed independently for each event without considering their possible relationship.
We propose DEGAP to address these challenges through two simple yet effective components: dual prefixes, i.e., learnable prompt vectors, and an event-guided adaptive gating mechanism.
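The summary above pairs learnable prefix vectors with an event-guided adaptive gate. A minimal sketch of such a gate in plain Python (the function and argument names are hypothetical, and the real model gates transformer prefixes rather than flat lists):

```python
import math

def gated_prefix(event_cue, prefix_a, prefix_b):
    """Blend two prefix vectors under an event-guided gate.

    event_cue: scalar relevance signal derived from the event (assumption)
    prefix_a, prefix_b: the two learnable prefix vectors to combine
    """
    # Sigmoid squashes the cue into a gate value in (0, 1).
    g = 1.0 / (1.0 + math.exp(-event_cue))
    # Element-wise blend: a strong cue favors prefix_a, a weak one prefix_b.
    return [g * a + (1.0 - g) * b for a, b in zip(prefix_a, prefix_b)]
```

With a neutral cue of 0 the gate is 0.5 and the result is the plain average of the two prefixes; as the cue grows, the output approaches `prefix_a`.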
arXiv Detail & Related papers (2024-05-22T03:56:55Z)
- Boosting Event Extraction with Denoised Structure-to-Text Augmentation [52.21703002404442]
Event extraction aims to recognize pre-defined event triggers and arguments from texts.
Recent data augmentation methods often neglect the problem of grammatical incorrectness.
We propose DAEE, a denoised structure-to-text augmentation framework for event extraction.
arXiv Detail & Related papers (2023-05-16T16:52:07Z)
- PILED: An Identify-and-Localize Framework for Few-Shot Event Detection [79.66042333016478]
In our study, we employ cloze prompts to elicit event-related knowledge from pretrained language models.
We minimize the number of type-specific parameters, enabling our model to quickly adapt to event detection tasks for new types.
arXiv Detail & Related papers (2022-02-15T18:01:39Z)
- Event Data Association via Robust Model Fitting for Event-based Object Tracking [66.05728523166755]
We propose a novel Event Data Association (called EDA) approach to explicitly address the event association and fusion problem.
The proposed EDA seeks event trajectories that best fit the event data, in order to perform unified data association and information fusion.
The experimental results show the effectiveness of EDA under challenging scenarios, such as high speed, motion blur, and high dynamic range conditions.
arXiv Detail & Related papers (2021-10-25T13:56:00Z)
- Event Extraction as Natural Language Generation [42.081626647997616]
Event extraction is usually formulated as a classification or structured prediction problem.
We propose GenEE, a model that not only captures complex dependencies within an event but also generalizes well to unseen or rare event types.
Empirical results show that our model achieves strong performance on event extraction tasks under all zero-shot, few-shot, and high-resource scenarios.
arXiv Detail & Related papers (2021-08-29T00:27:31Z)
- Event Presence Prediction Helps Trigger Detection Across Languages [13.06818350795583]
We show that a Transformer based architecture can effectively model event extraction as a sequence labeling task.
We propose a combination of sentence level and token level training objectives that significantly boosts the performance of a BERT based event extraction model.
arXiv Detail & Related papers (2020-09-15T15:52:21Z)
- Detecting Ongoing Events Using Contextual Word and Sentence Embeddings [110.83289076967895]
This paper introduces the Ongoing Event Detection (OED) task.
The goal is to detect ongoing event mentions only, as opposed to historical, future, hypothetical, or other forms of events that are neither fresh nor current.
Any application that needs to extract structured information about ongoing events from unstructured texts can take advantage of an OED system.
arXiv Detail & Related papers (2020-07-02T20:44:05Z)
- Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning [63.91369308085091]
We propose a novel and simple model for event sequence generation and explore temporal relationships of the event sequence in the video.
The proposed model omits inefficient two-stage proposal generation and directly generates event boundaries conditioned on bi-directional temporal dependency in one pass.
The overall system achieves state-of-the-art performance on the dense-captioning events in video task with 9.894 METEOR score on the challenge testing set.
arXiv Detail & Related papers (2020-06-14T13:21:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.