Bi-Directional Iterative Prompt-Tuning for Event Argument Extraction
- URL: http://arxiv.org/abs/2210.15843v1
- Date: Fri, 28 Oct 2022 02:31:59 GMT
- Title: Bi-Directional Iterative Prompt-Tuning for Event Argument Extraction
- Authors: Lu Dai and Bang Wang and Wei Xiang and Yijun Mo
- Abstract summary: We propose a bi-directional iterative prompt-tuning method for event argument extraction (EAE)
Our method explores event argument interactions by introducing the argument roles of contextual entities into prompt construction.
Experiments on the ACE 2005 English dataset with standard and low-resource settings show that the proposed method significantly outperforms the peer state-of-the-art methods.
- Score: 7.20903061029676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, prompt-tuning has attracted growing interest in event argument
extraction (EAE). However, the existing prompt-tuning methods have not achieved
satisfactory performance due to the lack of consideration of entity
information. In this paper, we propose a bi-directional iterative prompt-tuning
method for EAE, where the EAE task is treated as a cloze-style task to take
full advantage of entity information and pre-trained language models (PLMs).
Furthermore, our method explores event argument interactions by introducing the
argument roles of contextual entities into prompt construction. Since template
and verbalizer are two crucial components in a cloze-style prompt, we propose
to utilize the role label semantic knowledge to construct a semantic verbalizer
and design three kinds of templates for the EAE task. Experiments on the ACE
2005 English dataset with standard and low-resource settings show that the
proposed method significantly outperforms the peer state-of-the-art methods.
Our code is available at https://github.com/HustMinsLab/BIP.
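The cloze-style formulation with a semantic verbalizer described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the role labels, template wording, vocabulary, and masked-LM scores are all toy assumptions, and a real system would score the [MASK] position with a pre-trained language model.

```python
# Illustrative sketch: cloze-style event argument extraction with a
# semantic verbalizer. Scores are mocked; a real system would obtain
# them from a pre-trained masked language model.

# Verbalizer: maps each argument role label to a label word whose
# semantics reflect the role (the "role label semantic knowledge").
VERBALIZER = {
    "Attacker": "attacker",
    "Target": "target",
    "Place": "place",
    "None": "none",  # the entity fills no role in this event
}

def build_prompt(sentence: str, entity: str, trigger: str) -> str:
    """One possible template: ask which role the entity plays."""
    return (f"{sentence} In the event triggered by '{trigger}', "
            f"'{entity}' plays the role of [MASK].")

def predict_role(mask_scores: dict) -> str:
    """Pick the role whose verbalizer label word scores highest
    at the [MASK] position."""
    return max(VERBALIZER,
               key=lambda role: mask_scores.get(VERBALIZER[role],
                                                float("-inf")))

# Mock masked-LM scores for the label words at the [MASK] token.
prompt = build_prompt("Rebels attacked the convoy near Mosul.",
                      "Rebels", "attacked")
mock_scores = {"attacker": 7.1, "target": 2.3, "place": 0.4, "none": 1.0}
role = predict_role(mock_scores)
```

The design point is that the verbalizer turns role classification into vocabulary prediction, letting the frozen or fine-tuned PLM reuse what it already knows about words like "attacker" and "target".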
Related papers
- Learning Speech Representation From Contrastive Token-Acoustic
Pretraining [57.08426714676043]
We propose "Contrastive Token-Acoustic Pretraining (CTAP)", which uses two encoders to bring phoneme and speech into a joint multimodal space.
The proposed CTAP model is trained on 210k speech and phoneme pairs, achieving minimally-supervised TTS, VC, and ASR.
arXiv Detail & Related papers (2023-09-01T12:35:43Z) - SSP: Self-Supervised Post-training for Conversational Search [63.28684982954115]
We propose SSP, a new post-training paradigm with three self-supervised tasks to efficiently initialize the conversational search model.
To verify the effectiveness of our proposed method, we apply the conversational encoder post-trained by SSP on the conversational search task using two benchmark datasets: CAsT-19 and CAsT-20.
arXiv Detail & Related papers (2023-07-02T13:36:36Z) - Joint Event Extraction via Structural Semantic Matching [12.248124072173935]
Event Extraction (EE) is one of the essential tasks in information extraction.
This paper encodes the semantic features of event types and makes structural matching with target text.
arXiv Detail & Related papers (2023-06-06T07:42:39Z) - Automated Few-shot Classification with Instruction-Finetuned Language
Models [76.69064714392165]
We show that AuT-Few outperforms state-of-the-art few-shot learning methods.
We also show that AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark.
arXiv Detail & Related papers (2023-05-21T21:50:27Z) - How Does In-Context Learning Help Prompt Tuning? [55.78535874154915]
Fine-tuning large language models is becoming ever more impractical due to their rapidly-growing scale.
This motivates the use of parameter-efficient adaptation methods such as prompt tuning (PT), which adds a small number of tunable embeddings to an otherwise frozen model.
Recently, Singhal et al. (2022) proposed "instruction prompt tuning" (IPT), which combines PT with in-context learning (ICL) by concatenating a natural language demonstration with learned prompt embeddings.
arXiv Detail & Related papers (2023-02-22T17:45:12Z) - STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot
Classification [5.6205035780719275]
We propose the STPrompt (Semantic-guided and Task-driven Prompt) model.
The proposed model achieves state-of-the-art performance on five different datasets of few-shot text classification tasks.
arXiv Detail & Related papers (2022-10-29T04:42:30Z) - Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances to the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z) - CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument
Extraction [22.746071199667146]
Implicit event argument extraction (EAE) aims to identify arguments that may be scattered across the document.
We propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages.
In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models.
arXiv Detail & Related papers (2022-05-01T16:03:54Z) - Capturing Event Argument Interaction via A Bi-Directional Entity-Level
Recurrent Decoder [7.60457018063735]
We formalize event argument extraction (EAE) as a Seq2Seq-like learning problem for the first time.
A neural architecture with a novel Bi-directional Entity-level Recurrent Decoder (BERD) is proposed to generate argument roles.
arXiv Detail & Related papers (2021-07-01T02:55:12Z) - Event Detection as Question Answering with Entity Information [5.761450181435801]
We propose a paradigm for the task of event detection (ED) by casting it as a question-answering (QA) problem with the possibility of multiple answers and the support of entities.
The extraction of event triggers is, thus, transformed into the task of identifying answer spans from a context, while also focusing on the surrounding entities.
Experiments on the ACE2005 corpus demonstrate that the proposed paradigm is a viable solution for the ED task and it significantly outperforms the state-of-the-art models.
arXiv Detail & Related papers (2021-04-14T16:53:11Z) - Detecting Ongoing Events Using Contextual Word and Sentence Embeddings [110.83289076967895]
This paper introduces the Ongoing Event Detection (OED) task.
The goal is to detect ongoing event mentions only, as opposed to historical, future, hypothetical, or other forms of events that are neither fresh nor current.
Any application that needs to extract structured information about ongoing events from unstructured texts can take advantage of an OED system.
arXiv Detail & Related papers (2020-07-02T20:44:05Z)
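For context, the parameter-efficient prompt tuning (PT) setup mentioned in the related papers above (a small number of tunable embeddings prepended to an otherwise frozen model) can be sketched as follows. The shapes and the random "frozen" embeddings are toy assumptions chosen purely for illustration.

```python
import numpy as np

# Minimal sketch of prompt tuning (PT): a handful of trainable prompt
# embeddings are prepended to the frozen model's input embeddings.
rng = np.random.default_rng(0)
d_model, n_prompt, seq_len = 8, 4, 5

# The ONLY trainable parameters in PT.
prompt_embeddings = rng.normal(size=(n_prompt, d_model))

# Produced by the frozen embedding layer of the pre-trained model.
token_embeddings = rng.normal(size=(seq_len, d_model))

# Concatenate learned prompts in front of the input sequence;
# the rest of the model runs unchanged on this longer input.
model_input = np.concatenate([prompt_embeddings, token_embeddings], axis=0)
```

Only the `n_prompt * d_model` prompt parameters receive gradient updates, which is why PT scales to models where full fine-tuning is impractical.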
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.