Coarse-grained decomposition and fine-grained interaction for multi-hop
question answering
- URL: http://arxiv.org/abs/2101.05988v1
- Date: Fri, 15 Jan 2021 06:56:34 GMT
- Title: Coarse-grained decomposition and fine-grained interaction for multi-hop
question answering
- Authors: Xing Cao, Yun Liu
- Abstract summary: Many complex queries require multi-hop reasoning.
Bi-DAF generally captures only the surface semantics of words in complex questions.
We propose a new model architecture for multi-hop question answering.
- Score: 5.88731657602706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances regarding question answering and reading comprehension have
resulted in models that surpass human performance when the answer is contained
in a single, continuous passage of text, requiring only single-hop reasoning.
However, in real-world scenarios, many complex queries require multi-hop
reasoning. The key to the question answering task is the semantic feature
interaction between documents and questions, which is widely handled by
Bi-directional Attention Flow (Bi-DAF). However, Bi-DAF generally captures
only the surface semantics of words in complex questions and fails to
capture the implied semantic features of intermediate answers. As a result,
Bi-DAF ignores parts of the context related to the question and cannot
extract the most important parts of multiple documents. In this paper we
propose a new model architecture for multi-hop question answering that
applies two complementary strategies: (1) a Coarse-Grained complex question
Decomposition (CGDe) strategy decomposes a complex question into simple
ones without requiring any additional annotations, and (2) a Fine-Grained
Interaction (FGIn) strategy better represents each word in the document and
extracts more comprehensive and accurate sentences related to the inference
path. The two strategies are combined and tested on the SQuAD and HotpotQA
datasets, and the experimental results show that our method outperforms
state-of-the-art baselines.
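The Bi-DAF mechanism the abstract critiques can be sketched in a few lines. The following is a simplified NumPy illustration (it uses a plain dot-product similarity rather than Bi-DAF's trilinear scoring function, and random vectors in place of learned encodings), showing the two attention directions: context-to-query and query-to-context.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(C, Q):
    """Simplified Bi-DAF. C: (T, d) context word vectors; Q: (J, d) query
    word vectors. Dot-product similarity stands in for the trilinear form."""
    S = C @ Q.T                            # (T, J) similarity matrix
    # Context-to-query: each context word attends over all query words
    c2q = softmax(S, axis=1) @ Q           # (T, d)
    # Query-to-context: weight context words by their best match to any
    # query word, then tile the single summary vector over all positions
    b = softmax(S.max(axis=1))             # (T,)
    q2c = np.tile(b @ C, (C.shape[0], 1))  # (T, d)
    # Fused query-aware representation per context word
    return np.concatenate([C, c2q, C * c2q, C * q2c], axis=1)  # (T, 4d)

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 8))   # 5 context words, dim 8
Q = rng.standard_normal((3, 8))   # 3 query words
G = bidaf_attention(C, Q)
print(G.shape)  # (5, 32)
```

Note that the similarity matrix `S` is computed once from the surface word representations; this is the limitation the paper targets, since intermediate answers of a multi-hop question never appear in `S`.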
Related papers
- HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering using LLMs [9.559336828884808]
Large Language Models (LLMs) are adept at answering simple (single-hop) questions.
As the complexity of the questions increases, the performance of LLMs degrades.
Recent methods try to reduce this burden by integrating structured knowledge triples into the raw text.
We propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information.
arXiv Detail & Related papers (2024-06-10T05:22:49Z) - HPE: Answering Complex Questions over Text by Hybrid Question Parsing and
Execution [92.69684305578957]
We propose a framework of question parsing and execution on textual QA.
The proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z) - Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into a simple task, solve it, and then repeat the process until we get the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
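The iterative decompose-solve-repeat loop described above can be sketched as follows. This is a toy illustration, not the paper's method: `decompose_step` and the `FACTS` lookup table are hypothetical stand-ins for the LM prompts and the reading-comprehension step.

```python
# Toy single-hop "QA model": a lookup table of simple facts.
FACTS = {
    "Who directed Inception?": "Christopher Nolan",
    "Where was Christopher Nolan born?": "London",
}

def decompose_step(question, answers_so_far):
    """Hypothetical decomposer: given the complex question and the answers
    collected so far, emit the next simple sub-question, or None when the
    last answer resolves the original question."""
    if not answers_so_far:
        return "Who directed Inception?"
    if len(answers_so_far) == 1:
        return f"Where was {answers_so_far[0]} born?"
    return None

def successive_prompting(complex_question):
    """Iteratively decompose, solve each sub-question, and repeat until
    the decomposer signals that the final answer has been reached."""
    answers = []
    while True:
        sub_q = decompose_step(complex_question, answers)
        if sub_q is None:
            return answers[-1]
        answers.append(FACTS[sub_q])

print(successive_prompting("Where was the director of Inception born?"))
# London
```

In the actual method, each `decompose_step` and each sub-question answer would be produced by prompting a language model, with the intermediate answers fed back into the next prompt.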
arXiv Detail & Related papers (2022-12-08T06:03:38Z) - Complex Reading Comprehension Through Question Decomposition [48.256818683923626]
We propose a novel learning approach that helps language models better understand difficult multi-hop questions.
Our model first learns to decompose each multi-hop question into several sub-questions by a trainable question decomposer.
We leverage a reading comprehension model to predict the answer in a sequence-to-sequence manner.
arXiv Detail & Related papers (2022-11-07T02:54:04Z) - Modeling Multi-hop Question Answering as Single Sequence Prediction [88.72621430714985]
We propose a simple generative approach (PathFid) that extends the task beyond just answer generation.
PathFid explicitly models the reasoning process to resolve the answer for multi-hop questions.
Our experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets.
arXiv Detail & Related papers (2022-05-18T21:57:59Z) - Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z) - Answering Any-hop Open-domain Questions with Iterative Document
Reranking [62.76025579681472]
We propose a unified QA framework to answer any-hop open-domain questions.
Our method consistently achieves performance comparable to or better than the state-of-the-art on both single-hop and multi-hop open-domain QA datasets.
arXiv Detail & Related papers (2020-09-16T04:31:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.