Answer Generation through Unified Memories over Multiple Passages
- URL: http://arxiv.org/abs/2004.13829v1
- Date: Wed, 22 Apr 2020 11:46:40 GMT
- Title: Answer Generation through Unified Memories over Multiple Passages
- Authors: Makoto Nakatsuji, Sohei Okui
- Abstract summary: Our method is called neural answer Generation through Unified Memories over Multiple Passages (GUM-MP).
First, it determines which tokens in the passages match the question. In particular, it investigates matches between tokens in positive passages, which are assigned to the question, and those in negative passages, which are not related to the question.
Next, it determines which tokens in each passage match the other passages assigned to the same question, along with the topics in which they match.
Finally, it encodes the token sequences, together with the above two matching results, into unified memories in the passage encoders and learns the answer sequence with an encoder-decoder that uses a multiple-pointer-generator mechanism.
- Score: 10.965065178451104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine reading comprehension methods that generate answers by referring to multiple passages for a question have gained much attention in the AI and NLP communities. Current methods, however, do not investigate the relationships among multiple passages in the answer generation process, even though topics correlated across the passages may be answer candidates. Our method, called neural answer Generation through Unified Memories over Multiple Passages (GUM-MP), solves this problem as follows. First, it determines which tokens in the passages match the question. In particular, it investigates matches between tokens in positive passages, which are assigned to the question, and those in negative passages, which are not related to the question. Next, it determines which tokens in each passage match the other passages assigned to the same question, and at the same time it identifies the topics in which they match. Finally, it encodes the token sequences, together with the above two matching results, into unified memories in the passage encoders and learns the answer sequence with an encoder-decoder that uses a multiple-pointer-generator mechanism. As a result, GUM-MP can generate answers by pointing to important tokens present across passages. Evaluations indicate that GUM-MP generates much more accurate results than current models do.
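To make the pipeline concrete, the following is a minimal PyTorch sketch of the ideas above, not the authors' implementation: match features from question-passage and passage-passage comparisons are appended to token embeddings to form "unified memories", and a single decoding step mixes a generator's vocabulary distribution with copy distributions pointed at each passage. The cosine-similarity matcher, module names, dimensions, and toy inputs are all assumptions; the paper's actual matching and topic components are richer.

```python
# Minimal sketch of GUM-MP-style unified memories and a multiple-pointer-
# generator decoding step. Everything here is illustrative, not the paper's code.
import torch
import torch.nn.functional as F

def match_scores(query_states, key_states):
    """Max cosine similarity of each query token against all key tokens (assumed matcher)."""
    q = F.normalize(query_states, dim=-1)           # (Lq, H)
    k = F.normalize(key_states, dim=-1)             # (Lk, H)
    return (q @ k.t()).max(dim=-1).values           # (Lq,)

H, V = 64, 1000                                     # toy hidden size and vocabulary size
embed = torch.nn.Embedding(V, H)
encoder = torch.nn.GRU(H + 2, H, batch_first=True)  # +2 for the two match features
gen_head = torch.nn.Linear(H, V)                    # vocabulary (generation) distribution
mix_head = torch.nn.Linear(H, 3)                    # mixture weights: generate, copy from passage 1/2

question = torch.randint(0, V, (1, 6))              # toy token ids
passages = [torch.randint(0, V, (1, 12)) for _ in range(2)]

q_emb = embed(question)[0]
memories = []
for i, p in enumerate(passages):
    p_emb = embed(p)[0]
    q_match = match_scores(p_emb, q_emb)            # question-passage matching result
    others = torch.cat([embed(o)[0] for j, o in enumerate(passages) if j != i])
    x_match = match_scores(p_emb, others)           # passage-passage matching result
    unified = torch.cat([p_emb, q_match.unsqueeze(-1), x_match.unsqueeze(-1)], dim=-1)
    mem, _ = encoder(unified.unsqueeze(0))          # "unified memory" for passage i
    memories.append(mem[0])

# One decoding step: mix the vocabulary distribution with attention-based
# copy distributions over each passage's tokens (the multiple pointers).
state = torch.randn(H)                              # toy decoder state
p_vocab = F.softmax(gen_head(state), dim=-1)
weights = F.softmax(mix_head(state), dim=-1)        # [generate, copy_1, copy_2]
p_final = weights[0] * p_vocab
for w, mem, toks in zip(weights[1:], memories, passages):
    attn = F.softmax(mem @ state, dim=-1)           # attention over this passage's tokens
    p_final = p_final.index_add(0, toks[0], w * attn)  # scatter copy mass onto vocab ids
print(p_final.sum())                                # ~1.0: a valid output distribution
```

Because each copy distribution is indexed by that passage's own token ids, a token appearing in several passages accumulates probability mass from every pointer, which is one way to realize "pointing to important tokens present across passages".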
Related papers
- GenSco: Can Question Decomposition based Passage Alignment improve Question Answering? [1.5776201492893507]
"GenSco" is a novel approach of selecting passages based on the predicted decomposition of the multi-hop questions.
We evaluate on three broadly established multi-hop question answering datasets.
arXiv Detail & Related papers (2024-07-14T15:25:08Z) - Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: the FA-model, which simultaneously selects key phrases and generates full answers, and the Q-model, which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z) - Automatic Generation of Multiple-Choice Questions [7.310488568715925]
We present two methods to tackle the challenge of question-answer pair (QAP) generation.
One is a deep-learning-based end-to-end question generation system built on the T5 Transformer with preprocessing and postprocessing pipelines.
The other is a sequence-learning-based scheme that generates adequate QAPs via meta-sequence representations of sentences.
arXiv Detail & Related papers (2023-03-25T22:45:54Z) - Diverse Multi-Answer Retrieval with Determinantal Point Processes [11.925050407713597]
We propose a re-ranking approach based on determinantal point processes (DPPs) that uses BERT as the kernel; a minimal sketch of this style of re-ranking appears after this list.
Results demonstrate that our re-ranking technique outperforms the state-of-the-art method on the AmbigQA dataset.
arXiv Detail & Related papers (2022-11-29T08:54:05Z) - Modeling Multi-hop Question Answering as Single Sequence Prediction [88.72621430714985]
We propose a simple generative approach (PathFid) that extends the task beyond just answer generation.
PathFid explicitly models the reasoning process to resolve the answer for multi-hop questions.
Our experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets.
arXiv Detail & Related papers (2022-05-18T21:57:59Z) - Multi-hop Question Generation with Graph Convolutional Network [58.31752179830959]
Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over scattered pieces of evidence from different paragraphs.
We propose the Multi-Hop Encoding Fusion Network for Question Generation (MulQG), which does context encoding in multiple hops.
Our proposed model is able to generate fluent questions with high completeness and outperforms the strongest baseline by 20.8% in the multi-hop evaluation.
arXiv Detail & Related papers (2020-10-19T06:15:36Z) - Tag and Correct: Question aware Open Information Extraction with
Two-stage Decoding [73.24783466100686]
Question-aware Open IE takes a question and a passage as inputs and outputs an answer that contains a subject, a predicate, and one or more arguments.
Compared with a span answer, this semi-structured answer has two advantages: it is more readable and more falsifiable.
One existing approach is extractive: it extracts candidate answers from the passage with an Open IE model and ranks them by matching against the question.
The other is generative: it uses a sequence-to-sequence model to generate answers directly.
arXiv Detail & Related papers (2020-09-16T00:58:13Z) - Composing Answer from Multi-spans for Reading Comprehension [77.32873012668783]
We present a novel method to generate answers for non-extractive machine reading comprehension (MRC) tasks.
The proposed method performs better at accurately generating long answers and substantially outperforms two competitive baseline decoders, a typical one-span decoder and a Seq2Seq decoder.
arXiv Detail & Related papers (2020-09-14T01:44:42Z) - Crossing Variational Autoencoders for Answer Retrieval [50.17311961755684]
Question-answer alignment and question/answer semantics are two important signals for learning the representations.
We propose to cross variational auto-encoders by generating questions with aligned answers and generating answers with aligned questions.
arXiv Detail & Related papers (2020-05-06T01:59:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.