Learning to Ask for Data-Efficient Event Argument Extraction
- URL: http://arxiv.org/abs/2110.00479v1
- Date: Fri, 1 Oct 2021 15:22:37 GMT
- Title: Learning to Ask for Data-Efficient Event Argument Extraction
- Authors: Hongbin Ye, Ningyu Zhang, Zhen Bi, Shumin Deng, Chuanqi Tan, Hui Chen,
Fei Huang, Huajun Chen
- Abstract summary: Event argument extraction (EAE) is an important task for information extraction to discover specific argument roles.
In this study, we cast EAE as a question-based cloze task and empirically analyze fixed discrete token template performance.
We propose a novel approach called "Learning to Ask," which can learn optimized question templates for EAE without human annotations.
- Score: 23.106166629659405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event argument extraction (EAE) is an important task for information
extraction to discover specific argument roles. In this study, we cast EAE as a
question-based cloze task and empirically analyze fixed discrete token template
performance. As generating human-annotated question templates is often
time-consuming and labor-intensive, we further propose a novel approach called
"Learning to Ask," which can learn optimized question templates for EAE without
human annotations. Experiments using the ACE-2005 dataset demonstrate that our
method based on optimized questions achieves state-of-the-art performance in
both the few-shot and supervised settings.
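To make the question-based cloze formulation concrete, here is a minimal sketch of the fixed discrete-template baseline the abstract analyzes: each argument role is mapped to a hand-written question about the trigger, and an extractive QA model answers it over the sentence. The model name, role templates, and example sentence are illustrative assumptions, not the paper's released "Learning to Ask" code, which learns optimized templates rather than fixing them by hand.

```python
# Minimal sketch: event argument extraction cast as question answering
# with fixed discrete question templates (the baseline the paper analyzes).
# Model, templates, and example are illustrative assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

sentence = "Baghdad was attacked by insurgents on Tuesday."
trigger = "attacked"

# Hypothetical fixed templates mapping argument roles to questions.
templates = {
    "Attacker": "Who is the attacker in the {trigger} event?",
    "Target": "Who or what was {trigger}?",
    "Time": "When did the {trigger} event happen?",
}

for role, template in templates.items():
    question = template.format(trigger=trigger)
    answer = qa(question=question, context=sentence)
    print(f"{role}: {answer['answer']!r} (score={answer['score']:.3f})")
```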
Related papers
- CRAFT Your Dataset: Task-Specific Synthetic Dataset Generation Through Corpus Retrieval and Augmentation [51.2289822267563]
We propose Corpus Retrieval and Augmentation for Fine-Tuning (CRAFT), a method for generating synthetic datasets.
We use large-scale public web-crawled corpora and similarity-based document retrieval to find other relevant human-written documents (a minimal retrieval sketch, with assumed components, appears after this list).
We demonstrate that CRAFT can efficiently generate large-scale task-specific training datasets for four diverse tasks.
arXiv Detail & Related papers (2024-09-03T17:54:40Z)
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of the in-context exemplars in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- Optimizing Language Model's Reasoning Abilities with Weak Supervision [48.60598455782159]
We present PuzzleBen, a weakly supervised benchmark that comprises 25,147 complex questions, answers, and human-generated rationales.
A unique aspect of our dataset is the inclusion of 10,000 unannotated questions, enabling us to explore using less supervised data to boost LLMs' inference capabilities.
arXiv Detail & Related papers (2024-05-07T07:39:15Z)
- DACO: Towards Application-Driven and Comprehensive Data Analysis via Code Generation [83.30006900263744]
Data analysis is a crucial analytical process to generate in-depth studies and conclusive insights.
We propose to automatically generate high-quality answer annotations leveraging the code-generation capabilities of LLMs.
Human annotators judge our DACO-RL algorithm to produce more helpful answers than the SFT model in 57.72% of cases.
arXiv Detail & Related papers (2024-03-04T22:47:58Z)
- Towards Model-Based Data Acquisition for Subjective Multi-Task NLP Problems [12.38430125789305]
We propose a new model-based approach that allows the selection of tasks annotated individually for each text in a multi-task scenario.
Experiments carried out on three datasets, dozens of NLP tasks, and thousands of annotations show that our method allows up to a 40% reduction in the number of annotations with negligible loss of knowledge.
arXiv Detail & Related papers (2023-12-13T15:03:27Z)
- Event Extraction as Question Generation and Answering [72.04433206754489]
Recent work on Event Extraction has reframed the task as Question Answering (QA).
We propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates.
Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
arXiv Detail & Related papers (2023-07-10T01:46:15Z)
- On Event Individuation for Document-Level Information Extraction [10.051706937866504]
We argue that the task demands definitive answers to thorny questions of event individuation.
We show that this raises concerns about the usefulness of template filling metrics, the quality of datasets for the task, and the ability of models to learn it.
arXiv Detail & Related papers (2022-12-19T18:30:36Z)
- Bi-Directional Iterative Prompt-Tuning for Event Argument Extraction [7.20903061029676]
We propose a bi-directional iterative prompt-tuning method for event argument extraction (EAE).
Our method explores event argument interactions by introducing the argument roles of contextual entities into prompt construction.
Experiments on the ACE 2005 English dataset under standard and low-resource settings show that the proposed method significantly outperforms peer state-of-the-art methods.
arXiv Detail & Related papers (2022-10-28T02:31:59Z)
- Probing via Prompting [71.7904179689271]
This paper introduces a novel model-free approach to probing, by formulating probing as a prompting task.
We conduct experiments on five probing tasks and show that our approach is comparable to or better than diagnostic probes at extracting information.
We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
arXiv Detail & Related papers (2022-07-04T22:14:40Z)
- CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument Extraction [22.746071199667146]
Implicit event argument extraction (EAE) aims to identify arguments that may be scattered across the document.
We propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages.
In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models.
arXiv Detail & Related papers (2022-05-01T16:03:54Z)
- Event Detection as Question Answering with Entity Information [5.761450181435801]
We propose a paradigm for the task of event detection (ED) by casting it as a question-answering (QA) problem with the possibility of multiple answers and the support of entities.
The extraction of event triggers is, thus, transformed into the task of identifying answer spans from a context, while also focusing on the surrounding entities.
Experiments on the ACE2005 corpus demonstrate that the proposed paradigm is a viable solution for the ED task and that it significantly outperforms state-of-the-art models.
arXiv Detail & Related papers (2021-04-14T16:53:11Z)
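As referenced in the CRAFT entry above, here is a minimal sketch of similarity-based document retrieval under stated assumptions: TF-IDF cosine similarity stands in for whatever retriever CRAFT actually uses, and the corpus and seed texts are toy examples rather than web-crawled data.

```python
# Minimal sketch of similarity-based document retrieval in the spirit of
# CRAFT: given a few human-written seed examples, rank corpus documents by
# similarity and keep the closest ones as candidate training data.
# TF-IDF is an illustrative stand-in, not CRAFT's actual retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # stand-in for a large web-crawled corpus
    "The volcano erupted, forcing thousands of residents to evacuate.",
    "A new smartphone model was unveiled at the trade show.",
    "Rebels attacked the convoy near the border on Friday.",
    "The recipe calls for two cups of flour and a pinch of salt.",
]
seeds = ["Insurgents bombed the embassy, killing three guards."]

# Fit a shared vocabulary, then score corpus documents against the seeds.
vectorizer = TfidfVectorizer().fit(corpus + seeds)
scores = cosine_similarity(
    vectorizer.transform(seeds), vectorizer.transform(corpus)
)[0]

# Keep the top-k most similar documents as task-relevant candidates.
top_k = 2
for idx in scores.argsort()[::-1][:top_k]:
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```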
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.