Event Extraction as Question Generation and Answering
- URL: http://arxiv.org/abs/2307.05567v1
- Date: Mon, 10 Jul 2023 01:46:15 GMT
- Title: Event Extraction as Question Generation and Answering
- Authors: Di Lu, Shihao Ran, Joel Tetreault, Alejandro Jaimes
- Abstract summary: Recent work on Event Extraction has reframed the task as Question Answering (QA).
We propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates.
Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
- Score: 72.04433206754489
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work on Event Extraction has reframed the task as Question Answering
(QA), with promising results. The advantage of this approach is that it
addresses the error propagation issue found in traditional token-based
classification approaches by directly predicting event arguments without
extracting candidates first. However, the questions are typically based on
fixed templates and they rarely leverage contextual information such as
relevant arguments. In addition, prior QA-based approaches have difficulty
handling cases where there are multiple arguments for the same role. In this
paper, we propose QGA-EE, which enables a Question Generation (QG) model to
generate questions that incorporate rich contextual information instead of
using fixed templates. We also propose dynamic templates to assist the training
of the QG model. Experiments show that QGA-EE outperforms all prior
single-task-based models on the ACE05 English dataset.
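As a rough illustration of the question-generation-then-question-answering pipeline described above, the sketch below generates a role-specific question from the event context and then answers it to recover the argument. The checkpoint names and input encodings are placeholder assumptions, not the paper's actual configuration; in QGA-EE both steps would use models fine-tuned for their respective tasks.

```python
# Minimal sketch of a QG -> QA pipeline in the spirit of QGA-EE.
# The checkpoints and the input encodings below are illustrative
# assumptions, not the paper's exact configuration.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
qg_model = T5ForConditionalGeneration.from_pretrained("t5-base")  # stand-in for a fine-tuned QG model
qa_model = T5ForConditionalGeneration.from_pretrained("t5-base")  # stand-in for a fine-tuned QA model

def generate(model, prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=48)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

sentence = "John was shot dead by the robber in Chicago."
trigger, role = "shot", "Place"

# Step 1: generate a context-aware question for the role instead of
# filling a fixed template such as "Where did the attack take place?".
question = generate(qg_model, f"role: {role} trigger: {trigger} context: {sentence}")

# Step 2: answer the generated question against the same context;
# the decoded string is the predicted argument for the role.
argument = generate(qa_model, f"question: {question} context: {sentence}")
print(question, "->", argument)
```

Because the question is generated from the actual sentence rather than a fixed template, it can mention other arguments already present in the context, which is the contextual signal the paper argues fixed templates miss.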
Related papers
- Asking and Answering Questions to Extract Event-Argument Structures [7.997025284201876]
This paper presents a question-answering approach to extract document-level event-argument structures.
We automatically ask and answer questions for each argument type an event may have.
We use a simple span-swapping technique, coreference resolution, and large language models to augment the training instances.
arXiv Detail & Related papers (2024-04-25T08:43:06Z)
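The span-swapping augmentation mentioned above can be sketched as replacing the gold answer span of one training instance with the gold span of another instance that fills the same role, yielding a new synthetic instance. The field names are illustrative, not the paper's actual data format.

```python
# Hedged sketch of span-swapping data augmentation for QA-style
# event-argument training data. Field names are illustrative.

def span_swap(inst_a: dict, inst_b: dict) -> dict:
    """Build a new instance by replacing inst_a's gold span with inst_b's."""
    ctx, span = inst_a["context"], inst_a["answer"]
    start = ctx.index(span)
    new_ctx = ctx[:start] + inst_b["answer"] + ctx[start + len(span):]
    return {"context": new_ctx, "question": inst_a["question"], "answer": inst_b["answer"]}

a = {"context": "The rally was held in Paris.",
     "question": "Where was the rally held?", "answer": "Paris"}
b = {"context": "Protesters gathered in Berlin.",
     "question": "Where did the protesters gather?", "answer": "Berlin"}
print(span_swap(a, b))  # context becomes "The rally was held in Berlin."
```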
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
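One way to picture the end-to-end QAG formulation above is a single sequence-to-sequence model trained to emit every question-answer pair for a context as one flattened target string. The separator tokens and target format below are assumptions for illustration, not the paper's exact encoding.

```python
# Sketch of an end-to-end QAG target encoding: all gold QA pairs for a
# context are flattened into one string a seq2seq LM learns to generate.
# The "question: ..., answer: ..." format and " | " separator are assumed.

def encode_target(qa_pairs: list[tuple[str, str]]) -> str:
    """Flatten gold QA pairs into a single training target."""
    return " | ".join(f"question: {q}, answer: {a}" for q, a in qa_pairs)

def decode_target(text: str) -> list[tuple[str, str]]:
    """Invert encode_target on model output, skipping malformed chunks."""
    pairs = []
    for chunk in text.split(" | "):
        if chunk.startswith("question: ") and ", answer: " in chunk:
            q, a = chunk.split(", answer: ", 1)
            pairs.append((q.removeprefix("question: "), a))
    return pairs

target = encode_target([("Who wrote Hamlet?", "Shakespeare"),
                        ("When was it written?", "around 1600")])
print(decode_target(target))
```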
- Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation [30.086071993793823]
What-to-ask and how-to-ask are the two main challenges in the answer-unaware setting.
We present SG-CQG, a two-stage conversational question generation (CQG) framework.
arXiv Detail & Related papers (2023-05-04T18:06:48Z)
- Retrieval-Augmented Generative Question Answering for Event Argument Extraction [66.24622127143044]
We propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction.
It retrieves the most similar QA pair and augments it as prompt to the current example's context, then decodes the arguments as answers.
Our approach substantially outperforms prior methods across various settings.
arXiv Detail & Related papers (2022-11-14T02:00:32Z)
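The retrieve-then-prompt step described above can be sketched with an off-the-shelf sentence encoder: embed the current context, fetch the most similar stored QA pair, and prepend it as a demonstration. The prompt format and the use of sentence-transformers are assumptions for illustration, not R-GQA's exact setup.

```python
# Sketch of retrieval-augmented prompting: prepend the most similar
# stored QA pair as a demonstration before the current query.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

train_store = [
    {"context": "A car bomb exploded in Baghdad.",
     "question": "Where did the attack occur?", "answer": "Baghdad"},
    {"context": "The CEO resigned from the company.",
     "question": "Who left the job?", "answer": "The CEO"},
]

def build_prompt(context: str, question: str) -> str:
    # Retrieve the stored example with the highest cosine similarity.
    query_emb = encoder.encode(context, convert_to_tensor=True)
    store_embs = encoder.encode([ex["context"] for ex in train_store],
                                convert_to_tensor=True)
    best = train_store[int(util.cos_sim(query_emb, store_embs).argmax())]
    # The retrieved pair acts as an in-context demonstration.
    return (f"context: {best['context']} question: {best['question']} "
            f"answer: {best['answer']}\n"
            f"context: {context} question: {question} answer:")

print(build_prompt("A blast hit a market in Kabul.", "Where did the attack occur?"))
```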
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we take advantage of the QA model to extract more appropriate answers, iteratively refining the data in RefQA.
arXiv Detail & Related papers (2020-05-06T15:56:06Z)
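The iterative refinement described above can be sketched as a loop in which a QA model trained on the noisy harvested corpus overwrites answers with its own confident predictions before retraining. train_qa and predict below are trivial stand-ins for a real fine-tuning and inference pipeline.

```python
# Control-flow sketch of iterative answer refinement over a harvested
# QA corpus. train_qa and predict are hypothetical stand-ins.
import random

def train_qa(dataset: list[dict]) -> dict:
    # Stand-in: a real implementation would fine-tune a QA model here.
    return {"trained_on": len(dataset)}

def predict(model: dict, example: dict) -> tuple[str, float]:
    # Stand-in: a real model would return a predicted span and confidence.
    return example["answer"], random.random()

def refine(dataset: list[dict], rounds: int = 3, threshold: float = 0.8) -> list[dict]:
    for _ in range(rounds):
        model = train_qa(dataset)
        for ex in dataset:
            answer, score = predict(model, ex)
            # Keep only confident corrections, so noisy harvested answers
            # are gradually replaced by the model's more appropriate spans.
            if score >= threshold:
                ex["answer"] = answer
    return dataset

data = [{"question": "Where was the treaty signed?", "answer": "Paris"}]
print(refine(data))
```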
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
arXiv Detail & Related papers (2020-04-24T17:57:45Z)
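A minimal sketch of the template-based generation idea above: replace the answer span in a retrieved sentence with a wh-word chosen by a crude answer-type heuristic. The templates and type mapping are illustrative assumptions, not the paper's actual rules.

```python
# Template-based question generation: swap the answer span in a
# (retrieved) sentence for a wh-word picked by answer type.

WH_BY_TYPE = {"PERSON": "Who", "GPE": "Where", "DATE": "When"}

def template_question(sentence: str, answer: str, answer_type: str) -> str:
    wh = WH_BY_TYPE.get(answer_type, "What")
    # Cloze-style template: the answer span becomes the question word.
    return sentence.replace(answer, wh).rstrip(".") + "?"

print(template_question("Marie Curie discovered radium.", "Marie Curie", "PERSON"))
# -> "Who discovered radium?"
```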