Text2Event: Controllable Sequence-to-Structure Generation for End-to-end
Event Extraction
- URL: http://arxiv.org/abs/2106.09232v1
- Date: Thu, 17 Jun 2021 04:00:18 GMT
- Title: Text2Event: Controllable Sequence-to-Structure Generation for End-to-end
Event Extraction
- Authors: Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le
Sun, Meng Liao, Shaoyi Chen
- Abstract summary: Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event.
Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks.
We propose Text2Event, a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner.
- Score: 35.39643772926177
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event extraction is challenging due to the complex structure of event records
and the semantic gap between text and event. Traditional methods usually
extract event records by decomposing the complex structure prediction task into
multiple subtasks. In this paper, we propose Text2Event, a
sequence-to-structure generation paradigm that can directly extract events from
the text in an end-to-end manner. Specifically, we design a
sequence-to-structure network for unified event extraction, a constrained
decoding algorithm for event knowledge injection during inference, and a
curriculum learning algorithm for efficient model learning. Experimental
results show that, by uniformly modeling all tasks in a single model and
universally predicting different labels, our method can achieve competitive
performance using only record-level annotations in both supervised learning and
transfer learning settings.
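To make the second component concrete, here is a minimal sketch of the constrained-decoding idea: the decoder emits a parenthesized linearization of event records, and at each step a prefix trie built from the event schema restricts which tokens may come next. The toy schema, the "<trigger>"/"<arg>" slot markers, the single-token slots, and the fixed role order below are simplifying assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch of trie-constrained decoding for sequence-to-structure event
# extraction. The schema, slot markers, and single-token slots are illustrative
# assumptions, not the paper's exact code.
from typing import Dict, List


def build_trie(sequences: List[List[str]]) -> Dict:
    """Build a prefix trie over the linearized structures allowed by the schema."""
    trie: Dict = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie


# Toy event schema: event type -> argument roles.
SCHEMA = {
    "Attack": ["Attacker", "Target"],
    "Transport": ["Agent", "Artifact", "Destination"],
}

# Parenthesized skeleton per event type, e.g.
#   ( ( Attack <trigger> ( Attacker <arg> ) ( Target <arg> ) ) )
# where <trigger>/<arg> mark slots to be filled with source-sentence tokens.
skeletons = [
    ["(", "(", etype, "<trigger>"]
    + [tok for role in roles for tok in ("(", role, "<arg>", ")")]
    + [")", ")"]
    for etype, roles in SCHEMA.items()
]
TRIE = build_trie(skeletons)


def allowed_next_tokens(prefix: List[str], source_tokens: List[str]) -> List[str]:
    """Return the tokens the decoder may emit after `prefix`.

    Structure tokens must follow the trie; slot positions are constrained to
    tokens copied from the source sentence (one token per slot, for simplicity).
    """
    node = TRIE
    for tok in prefix:
        if tok in node:                                    # structure/schema token
            node = node[tok]
        elif "<trigger>" in node and tok in source_tokens:
            node = node["<trigger>"]                       # trigger slot filled
        elif "<arg>" in node and tok in source_tokens:
            node = node["<arg>"]                           # argument slot filled
        else:
            return []                                      # invalid prefix: prune
    allowed: List[str] = []
    for child in node:
        if child in ("<trigger>", "<arg>"):
            allowed.extend(source_tokens)                  # slot: copy from source
        else:
            allowed.append(child)                          # schema-defined token
    return allowed


if __name__ == "__main__":
    src = "Soldiers attacked the camp".split()
    print(allowed_next_tokens(["(", "("], src))                        # event types
    print(allowed_next_tokens(["(", "(", "Attack"], src))              # trigger slot
    print(allowed_next_tokens(["(", "(", "Attack", "attacked"], src))  # '('
```

In practice the same constraint is typically plugged into a beam-search decoder (for example, by pruning beams whose next token falls outside the allowed set), so that every generated sequence can be parsed back into a valid event record.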
Related papers
- Pretext Training Algorithms for Event Sequence Data [29.70078362944441]
This paper proposes a self-supervised pretext training framework tailored to event sequence data.
Our pretext tasks unlock foundational representations that are generalizable across different down-stream tasks.
arXiv Detail & Related papers (2024-02-16T01:25:21Z)
- Towards Event Extraction from Speech with Contextual Clues [61.164413398231254]
We introduce the Speech Event Extraction (SpeechEE) task and construct three synthetic training sets and one human-spoken test set.
Compared to event extraction from text, SpeechEE poses greater challenges mainly due to complex speech signals that are continuous and have no word boundaries.
Our method brings significant improvements on all datasets, achieving a maximum F1 gain of 10.7%.
arXiv Detail & Related papers (2024-01-27T11:07:19Z)
- Token-Event-Role Structure-based Multi-Channel Document-Level Event Extraction [15.02043375212839]
This paper introduces a novel framework for document-level event extraction, incorporating a new data structure called token-event-role.
The proposed data structure enables our model to uncover the primary role of tokens in multiple events, facilitating a more comprehensive understanding of event relationships.
The results demonstrate that our approach outperforms the state-of-the-art method by 9.5 percentage points in terms of the F1 score.
arXiv Detail & Related papers (2023-06-30T15:22:57Z)
- Joint Event Extraction via Structural Semantic Matching [12.248124072173935]
Event Extraction (EE) is one of the essential tasks in information extraction.
This paper encodes the semantic features of event types and performs structural matching between them and the target text.
arXiv Detail & Related papers (2023-06-06T07:42:39Z)
- Boosting Event Extraction with Denoised Structure-to-Text Augmentation [52.21703002404442]
Event extraction aims to recognize pre-defined event triggers and arguments from texts.
Recent data augmentation methods often neglect the problem of grammatical incorrectness.
We propose DAEE, a denoised structure-to-text augmentation framework for event extraction.
arXiv Detail & Related papers (2023-05-16T16:52:07Z)
- PESE: Event Structure Extraction using Pointer Network based Encoder-Decoder Architecture [0.0]
Event extraction (EE) aims to find the events and event-related argument information from the text and represent them in a structured format.
In this paper, we represent each event record in a unique format that contains the trigger phrase, trigger type, argument phrase, and corresponding role information (a toy illustration of such a record appears after this list).
Our proposed pointer network-based encoder-decoder model generates an event in each step by exploiting the interactions among event participants.
arXiv Detail & Related papers (2022-11-22T10:36:56Z)
- Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach that uses large language models to generate source documents from which, given a high-level event definition, the specific events, arguments, and relations between them can be predicted.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- Integrating Deep Event-Level and Script-Level Information for Script Event Prediction [60.67635412135681]
We propose a Transformer-based model, called MCPredictor, which integrates deep event-level and script-level information for script event prediction.
The experimental results on the widely-used New York Times corpus demonstrate the effectiveness and superiority of the proposed model.
arXiv Detail & Related papers (2021-09-24T07:37:32Z)
- Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
- Document-level Event Extraction with Efficient End-to-end Learning of Cross-event Dependencies [37.96254956540803]
We propose an end-to-end model leveraging Deep Value Networks (DVN), a structured prediction algorithm, to efficiently capture cross-event dependencies for document-level event extraction.
Our approach achieves performance comparable to CRF-based models on ACE05 while enjoying significantly higher computational efficiency.
arXiv Detail & Related papers (2020-10-24T05:28:16Z)
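As a companion to the PESE entry above, here is a purely illustrative example of a structured event record carrying the fields that summary names (trigger phrase, trigger type, argument phrase, role). The field names, type labels, and dataclass layout are assumptions for illustration, not that paper's exact format.

```python
# Toy event record with the fields named in the PESE summary above; the field
# names and labels are illustrative assumptions, not the paper's exact format.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class EventRecord:
    trigger_phrase: str                # e.g. "attacked"
    trigger_type: str                  # e.g. "Conflict.Attack"
    arguments: List[Tuple[str, str]]   # (argument phrase, role) pairs


record = EventRecord(
    trigger_phrase="attacked",
    trigger_type="Conflict.Attack",
    arguments=[("Soldiers", "Attacker"), ("the camp", "Target")],
)
# A pointer-network style decoder would emit such a record element by element,
# pointing back into the source sentence for the trigger and argument spans.
print(record)
```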
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.