Tag and Correct: Question aware Open Information Extraction with
Two-stage Decoding
- URL: http://arxiv.org/abs/2009.07406v1
- Date: Wed, 16 Sep 2020 00:58:13 GMT
- Title: Tag and Correct: Question aware Open Information Extraction with
Two-stage Decoding
- Authors: Martin Kuo, Yaobo Liang, Lei Ji, Nan Duan, Linjun Shou, Ming Gong,
Peng Chen
- Abstract summary: Question aware Open IE takes a question and a passage as inputs, outputting an answer which contains a subject, a predicate, and one or more arguments.
Compared to a span answer, the semi-structured answer has two advantages: it is more readable and more falsifiable.
There are two existing approaches. One is an extractive method, which extracts candidate answers from the passage with an Open IE model and ranks them by matching with the question.
The other is a generative method, which uses a sequence-to-sequence model to generate answers directly.
- Score: 73.24783466100686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question Aware Open Information Extraction (Question aware Open IE) takes
a question and a passage as inputs and outputs an answer tuple which contains a
subject, a predicate, and one or more arguments. Each field of the answer is a
natural language word sequence extracted from the passage. Compared to a span
answer, the semi-structured answer has two advantages: it is more readable and
more falsifiable. There are two approaches to this problem. One is the
extractive method, which extracts candidate answers from the passage with an
Open IE model and ranks them by matching against the question. It fully uses
the passage information at the extraction step, but the extraction is
independent of the question. The other is the generative method, which uses a
sequence-to-sequence model to generate answers directly. It takes the question
and the passage together as input, but it generates the answer from scratch and
does not exploit the fact that most of the answer words come from the passage.
To guide the generation with the passage, we present a two-stage decoding model
which contains a tagging decoder and a correction decoder. At the first stage,
the tagging decoder tags keywords in the passage. At the second stage, the
correction decoder generates the answer based on the tagged keywords. Although
it has two stages, our model can be trained end-to-end. Compared to previous
generative models, we generate better answers by proceeding from coarse to
fine. We evaluate our model on WebAssertions (Yan et al., 2018), a Question
aware Open IE dataset. Our model achieves a BLEU score of 59.32, which is
better than previous generative methods.
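To make the tag-then-correct idea concrete, the sketch below shows one minimal way such a two-stage decoder could be wired up in PyTorch: an encoder over the question and passage, a tagging head that scores passage tokens as keywords, and a correction decoder that attends to the softly tagged keywords to generate the answer. This is an illustrative assumption about the architecture, not the authors' implementation; all module names, sizes, and the soft-masking trick are choices made here for readability.
```python
# Minimal sketch of a tag-then-correct two-stage decoder (illustrative only;
# not the paper's released implementation -- all names and sizes are assumptions).
import torch
import torch.nn as nn


class TwoStageDecoder(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Shared encoder over the concatenated question + passage tokens.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Stage 1: tagging decoder -- a per-token keep/drop score for keywords.
        self.tagger = nn.Linear(d_model, 2)
        # Stage 2: correction decoder -- generates the answer while attending
        # to the representations of the tagged keywords.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.corrector = nn.TransformerDecoder(dec_layer, num_layers)
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids, answer_ids):
        # input_ids:  (batch, src_len) question + passage token ids
        # answer_ids: (batch, tgt_len) gold answer ids (teacher forcing)
        memory = self.encoder(self.embed(input_ids))
        tag_logits = self.tagger(memory)                  # (batch, src_len, 2)
        # A soft keyword mask keeps both stages differentiable end-to-end.
        keep_prob = tag_logits.softmax(dim=-1)[..., 1:]   # (batch, src_len, 1)
        keyword_memory = memory * keep_prob
        tgt = self.embed(answer_ids)
        tgt_len = answer_ids.size(1)
        causal = torch.triu(
            torch.full((tgt_len, tgt_len), float("-inf"), device=answer_ids.device),
            diagonal=1,
        )
        dec_out = self.corrector(tgt, keyword_memory, tgt_mask=causal)
        return tag_logits, self.out_proj(dec_out)
```
In training, the tagging logits would be supervised with keyword labels (for instance, passage tokens that also appear in the gold answer) and the correction logits with the gold answer tokens, with the two losses summed so the whole model trains end-to-end in the coarse-to-fine fashion the abstract describes.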
Related papers
- FastFiD: Improve Inference Efficiency of Open Domain Question Answering via Sentence Selection [61.9638234358049]
FastFiD is a novel approach that executes sentence selection on encoded passages.
This aids in retaining valuable sentences while reducing the context length required for generating answers.
arXiv Detail & Related papers (2024-08-12T17:50:02Z)
- Phrase Retrieval for Open-Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning [54.55643652781891]
Open-Domain Conversational Question Answering (ODConvQA) aims at answering questions through a multi-turn conversation.
We propose a method to directly predict answers with a phrase retrieval scheme for a sequence of words.
arXiv Detail & Related papers (2023-06-07T09:46:38Z)
- Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation [30.086071993793823]
What-to-ask and how-to-ask are the two main challenges in the answer-unaware setting.
We present SG-CQG, a two-stage CQG framework.
arXiv Detail & Related papers (2023-05-04T18:06:48Z)
- LIQUID: A Framework for List Question Answering Dataset Generation [17.86721740779611]
We propose LIQUID, an automated framework for generating list QA datasets from unlabeled corpora.
We first convert a passage from Wikipedia or PubMed into a summary and extract named entities from the summarized text as candidate answers.
We then create questions using an off-the-shelf question generator with the extracted entities and original passage.
Using our synthetic data, we significantly improve the performance of the previous best list QA models by exact-match F1 scores of 5.0 on MultiSpanQA, 1.9 on Quoref, and 2.8 averaged across three BioASQ benchmarks.
arXiv Detail & Related papers (2023-02-03T12:42:45Z)
- ListReader: Extracting List-form Answers for Opinion Questions [18.50111430378249]
ListReader is a neural extractive QA model for list-form answers.
In addition to learning the alignment between the question and content, we introduce a heterogeneous graph neural network.
Our model adopts a co-extraction setting that can extract either span- or sentence-level answers.
arXiv Detail & Related papers (2021-10-22T10:33:08Z)
- Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z)
- Composing Answer from Multi-spans for Reading Comprehension [77.32873012668783]
We present a novel method to generate answers for non-extraction machine reading comprehension (MRC) tasks.
The proposed method has a better performance on accurately generating long answers, and substantially outperforms two competitive typical one-span and Seq2Seq baseline decoders.
arXiv Detail & Related papers (2020-09-14T01:44:42Z)
- Crossing Variational Autoencoders for Answer Retrieval [50.17311961755684]
Question-answer alignment and question/answer semantics are two important signals for learning the representations.
We propose to cross variational auto-encoders by generating questions with aligned answers and generating answers with aligned questions.
arXiv Detail & Related papers (2020-05-06T01:59:13Z)
- Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network [0.0]
We present a two stage model for multi-hop question answering.
The first stage is a hierarchical graph network, which is used to reason over the multi-hop question.
The second stage is a language model fine-tuning task.
arXiv Detail & Related papers (2020-04-20T09:34:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.