Enhancing Document-level Event Argument Extraction with Contextual Clues
and Role Relevance
- URL: http://arxiv.org/abs/2310.05991v3
- Date: Fri, 20 Oct 2023 02:20:20 GMT
- Title: Enhancing Document-level Event Argument Extraction with Contextual Clues
and Role Relevance
- Authors: Wanlong Liu, Shaohuan Cheng, Dingyi Zeng, Hong Qu
- Abstract summary: Document-level event argument extraction poses new challenges of long input and cross-sentence inference.
We propose a Span-trigger-based Contextual Pooling and latent Role Guidance model.
- Score: 12.239459451494872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Document-level event argument extraction poses new challenges of long input
and cross-sentence inference compared to its sentence-level counterpart.
However, most prior works focus on capturing the relations between candidate
arguments and the event trigger in each event, ignoring two crucial points: a)
non-argument contextual clue information; b) the relevance among argument
roles. In this paper, we propose a SCPRG (Span-trigger-based Contextual Pooling
and latent Role Guidance) model, which contains two novel and effective modules
for the above problems. The Span-Trigger-based Contextual Pooling (STCP) module
adaptively selects and aggregates the information of non-argument clue words
based on the context attention weights of specific argument-trigger pairs from
the pre-trained model. The Role-based Latent Information Guidance (RLIG) module
constructs latent role representations, makes them interact through
role-interactive encoding to capture semantic relevance, and merges them into
candidate arguments. Both STCP and RLIG are compact and transplantable: they
introduce no more than 1% new parameters compared with the base model and can
be easily applied to other event extraction models. Experiments on
two public datasets show that our SCPRG outperforms previous state-of-the-art
methods, with 1.13 F1 and 2.64 F1 improvements on RAMS and WikiEvents
respectively. Further analyses illustrate the interpretability of our model.
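The core STCP idea (pooling non-argument context tokens by the attention a specific argument-trigger pair places on them) can be illustrated with a minimal NumPy sketch. This is an assumption-laden reading of the abstract, not the paper's implementation: the function name `stcp_pool` and the choice of multiplying the span's and trigger's attention distributions are hypothetical simplifications.

```python
import numpy as np

def stcp_pool(attn, hidden, span_idx, trigger_idx):
    """Illustrative span-trigger contextual pooling (not the paper's code).

    attn:        (num_tokens, num_tokens) attention weights from a
                 pre-trained encoder, e.g. averaged over heads/layers.
    hidden:      (num_tokens, dim) token representations.
    span_idx:    token indices of the candidate argument span.
    trigger_idx: token indices of the event trigger.
    """
    a_span = attn[span_idx].mean(axis=0)     # span's attention over all tokens
    a_trig = attn[trigger_idx].mean(axis=0)  # trigger's attention over all tokens
    w = a_span * a_trig                      # tokens attended by BOTH
    w = w / (w.sum() + 1e-12)                # renormalize to a distribution
    return w @ hidden                        # attention-weighted context pooling

# Toy example with random attention and hidden states.
rng = np.random.default_rng(0)
n, d = 8, 4
attn = rng.random((n, n))
attn = attn / attn.sum(axis=-1, keepdims=True)  # row-stochastic attention
hidden = rng.standard_normal((n, d))
ctx = stcp_pool(attn, hidden, span_idx=[2, 3], trigger_idx=[5])
print(ctx.shape)  # (4,)
```

The pooled vector `ctx` would then be concatenated with the span representation before role classification; since the attention weights come from the existing encoder, such a module adds almost no new parameters, consistent with the under-1% figure reported above.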
Related papers
- Utilizing Contextual Clues and Role Correlations for Enhancing Document-level Event Argument Extraction [14.684710634595866]
Document-level event argument extraction is a crucial yet challenging task within the field of information extraction.
Here, we introduce a novel framework named CARLG, comprising two innovative components: the Contextual Clues Aggregation (CCA) and the Role-based Latent Information Guidance (RLIG).
We then instantiate the CARLG framework into two variants based on two types of current mainstream EAE approaches. Notably, our CARLG framework introduces less than 1% new parameters yet significantly improves performance.
arXiv Detail & Related papers (2023-10-08T11:09:16Z) - Global Constraints with Prompting for Zero-Shot Event Argument
Classification [49.84347224233628]
We propose to use global constraints with prompting to tackle event argument classification without any annotation and task-specific training.
A pre-trained language model scores the new passages to make the initial prediction.
Our novel prompt templates can easily adapt to all events and argument types without manual effort.
arXiv Detail & Related papers (2023-02-09T06:39:29Z) - RAAT: Relation-Augmented Attention Transformer for Relation Modeling in
Document-Level Event Extraction [16.87868728956481]
We propose a new DEE framework which can model the relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE)
To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance.
arXiv Detail & Related papers (2022-06-07T15:11:42Z) - EA$^2$E: Improving Consistency with Event Awareness for Document-Level
Argument Extraction [52.43978926985928]
We introduce the Event-Aware Argument Extraction (EA$^2$E) model with augmented context for training and inference.
Experiment results on WIKIEVENTS and ACE2005 datasets demonstrate the effectiveness of EA$2$E.
arXiv Detail & Related papers (2022-05-30T04:33:51Z) - CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument
Extraction [22.746071199667146]
Implicit event argument extraction (EAE) aims to identify arguments that may be scattered across the document.
We propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages.
In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models.
arXiv Detail & Related papers (2022-05-01T16:03:54Z) - Long Document Summarization with Top-down and Bottom-up Inference [113.29319668246407]
We propose a principled inference framework to improve summarization models on two aspects.
Our framework assumes a hierarchical latent structure of a document where the top-level captures the long range dependency.
We demonstrate the effectiveness of the proposed framework on a diverse set of summarization datasets.
arXiv Detail & Related papers (2022-03-15T01:24:51Z) - Capturing Event Argument Interaction via A Bi-Directional Entity-Level
Recurrent Decoder [7.60457018063735]
We formalize event argument extraction (EAE) as a Seq2Seq-like learning problem for the first time.
A neural architecture with a novel Bi-directional Entity-level Recurrent Decoder (BERD) is proposed to generate argument roles.
arXiv Detail & Related papers (2021-07-01T02:55:12Z) - Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit
Argument Relations [70.35379323231241]
This paper presents a better approach for event extraction by explicitly utilizing the relationships of event arguments.
We employ reinforcement learning and incremental learning to extract multiple arguments via a multi-turn, iterative process.
Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods.
arXiv Detail & Related papers (2021-06-23T13:24:39Z) - Pairwise Representation Learning for Event Coreference [73.10563168692667]
We develop a Pairwise Representation Learning (PairwiseRL) scheme for the event mention pairs.
Our representation supports a finer, structured representation of the text snippet to facilitate encoding events and their arguments.
We show that PairwiseRL, despite its simplicity, outperforms the prior state-of-the-art event coreference systems on both cross-document and within-document event coreference benchmarks.
arXiv Detail & Related papers (2020-10-24T06:55:52Z) - High-order Semantic Role Labeling [86.29371274587146]
This paper introduces a high-order graph structure for the neural semantic role labeling model.
It enables the model to explicitly consider not only the isolated predicate-argument pairs but also the interaction between the predicate-argument pairs.
Experimental results on 7 languages of the CoNLL-2009 benchmark show that the high-order structural learning techniques are beneficial to the strong performing SRL models.
arXiv Detail & Related papers (2020-10-09T15:33:54Z) - Resource-Enhanced Neural Model for Event Argument Extraction [28.812507794694543]
Event argument extraction aims to identify the arguments of an event and classify the roles that those arguments play.
We propose a trigger-aware sequence encoder with several types of trigger-dependent sequence representations.
Experiments on the English ACE2005 benchmark show that our approach achieves a new state-of-the-art.
arXiv Detail & Related papers (2020-10-06T21:00:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.