Interpretable AMR-Based Question Decomposition for Multi-hop Question
Answering
- URL: http://arxiv.org/abs/2206.08486v1
- Date: Thu, 16 Jun 2022 23:46:33 GMT
- Title: Interpretable AMR-Based Question Decomposition for Multi-hop Question
Answering
- Authors: Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, Patricia
Riddle
- Abstract summary: We propose a Question Decomposition method based on Abstract Meaning Representation (QDAMR) for multi-hop QA.
We decompose a multi-hop question into simpler sub-questions and answer them in order.
Experimental results on HotpotQA demonstrate that our approach is competitive for interpretable reasoning.
- Score: 12.35571328854374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective multi-hop question answering (QA) requires reasoning over multiple
scattered paragraphs and providing explanations for answers. Most existing
approaches cannot provide an interpretable reasoning process to illustrate how
these models arrive at an answer. In this paper, we propose a Question
Decomposition method based on Abstract Meaning Representation (QDAMR) for
multi-hop QA, which achieves interpretable reasoning by decomposing a multi-hop
question into simpler sub-questions and answering them in order. Since
annotating the decomposition is expensive, we first delegate the complexity of
understanding the multi-hop question to an AMR parser. We then achieve the
decomposition of a multi-hop question via segmentation of the corresponding AMR
graph based on the required reasoning type. Finally, we generate sub-questions
using an AMR-to-Text generation model and answer them with an off-the-shelf QA
model. Experimental results on HotpotQA demonstrate that our approach is
competitive for interpretable reasoning and that the sub-questions generated by
QDAMR are well-formed, outperforming existing question-decomposition-based
multi-hop QA approaches.
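The decompose-then-answer loop described in the abstract (segment the question, generate ordered sub-questions, answer them with an off-the-shelf QA model, substituting earlier answers into later hops) can be sketched as follows. This is a toy illustration only: `decompose` and `answer_single_hop` are hypothetical stubs standing in for the paper's AMR parser, graph segmentation, and QA model, and the `#1` placeholder convention for bridge questions is an assumption borrowed from common decomposition formats, not necessarily QDAMR's.

```python
# Toy sketch of a QDAMR-style decompose-then-answer loop.
# All components below are illustrative stubs, not the paper's actual
# AMR parser, graph-segmentation algorithm, or QA model.

def decompose(question: str) -> list[str]:
    """Stand-in for AMR parsing + graph segmentation + AMR-to-Text.

    Returns ordered sub-questions; '#1' marks where the answer to the
    first hop should be substituted into the second hop.
    """
    # Hard-coded bridge-type decomposition for the demo question.
    return [
        "Which band recorded the album Abbey Road?",
        "Who was the lead guitarist of #1?",
    ]

def answer_single_hop(sub_question: str) -> str:
    """Stand-in for an off-the-shelf single-hop QA model."""
    toy_kb = {
        "Which band recorded the album Abbey Road?": "The Beatles",
        "Who was the lead guitarist of The Beatles?": "George Harrison",
    }
    return toy_kb[sub_question]

def answer_multi_hop(question: str) -> str:
    """Answer sub-questions in order, substituting earlier answers."""
    answers: list[str] = []
    for sub_q in decompose(question):
        # Replace placeholders like '#1' with previous hops' answers.
        for i, prev in enumerate(answers, start=1):
            sub_q = sub_q.replace(f"#{i}", prev)
        answers.append(answer_single_hop(sub_q))
    return answers[-1]  # final hop's answer is the overall answer

print(answer_multi_hop(
    "Who was the lead guitarist of the band that recorded Abbey Road?"
))  # -> George Harrison
```

The interpretability claim rests on this structure: each intermediate sub-question and sub-answer is human-readable, so the reasoning chain can be inspected hop by hop.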
Related papers
- GenDec: A robust generative Question-decomposition method for Multi-hop
reasoning [32.12904215053187]
Multi-hop QA involves step-by-step reasoning to answer complex questions.
The reasoning ability of existing large language models (LLMs) in multi-hop question answering remains underexplored.
It is unclear whether LLMs follow a desired reasoning chain to reach the right final answer.
arXiv Detail & Related papers (2024-02-17T02:21:44Z)
- Answering Questions by Meta-Reasoning over Multiple Chains of Thought [53.55653437903948]
We introduce Multi-Chain Reasoning (MCR), an approach which prompts large language models to meta-reason over multiple chains of thought.
MCR examines different reasoning chains, mixes information between them and selects the most relevant facts in generating an explanation and predicting the answer.
arXiv Detail & Related papers (2023-04-25T17:27:37Z)
- Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering [85.79940770146557]
We decompose multi-hop questions into multiple corresponding single-hop questions.
We find marked inconsistency in QA models' answers on these pairs of ostensibly identical question chains.
When trained only on single-hop questions, models generalize poorly to multi-hop questions.
arXiv Detail & Related papers (2022-10-09T11:48:07Z)
- Locate Then Ask: Interpretable Stepwise Reasoning for Multi-hop Question Answering [71.49131159045811]
Multi-hop reasoning requires aggregating multiple documents to answer a complex question.
Existing methods usually decompose the multi-hop question into simpler single-hop questions.
We propose an interpretable stepwise reasoning framework to incorporate both single-hop supporting sentence identification and single-hop question generation.
arXiv Detail & Related papers (2022-08-22T13:24:25Z)
- Modeling Multi-hop Question Answering as Single Sequence Prediction [88.72621430714985]
We propose a simple generative approach (PathFid) that extends the task beyond just answer generation.
PathFid explicitly models the reasoning process to resolve the answer for multi-hop questions.
Our experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets.
arXiv Detail & Related papers (2022-05-18T21:57:59Z)
- Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes [14.302797773412543]
Multi-hop Question Answering (QA) is a challenging task since it requires an accurate aggregation of information from multiple context paragraphs.
Recent work in multi-hop QA has shown that performance can be boosted by first decomposing the questions into simpler, single-hop questions.
We show that decomposition is an effective form of probing QA systems as well as a promising approach to explanation generation.
arXiv Detail & Related papers (2022-04-16T01:03:36Z)
- Ask to Understand: Question Generation for Multi-hop Question Answering [11.626390908264872]
Multi-hop Question Answering (QA) requires the machine to answer complex questions by finding scattered clues and reasoning over multiple documents.
We propose a novel method to complete multi-hop QA from the perspective of Question Generation (QG).
arXiv Detail & Related papers (2022-03-17T04:02:29Z)
- Do Multi-Hop Question Answering Systems Know How to Answer the Single-Hop Sub-Questions? [23.991872322492384]
We investigate whether top-performing models for multi-hop questions understand the underlying sub-questions like humans.
We show that multiple state-of-the-art multi-hop QA models fail to correctly answer a large portion of sub-questions.
Our work takes a step forward towards building a more explainable multi-hop QA system.
arXiv Detail & Related papers (2020-02-23T15:16:43Z)
- Unsupervised Question Decomposition for Question Answering [102.56966847404287]
We propose One-to-N Unsupervised Sequence transduction (ONUS), an algorithm that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions.
We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets.
arXiv Detail & Related papers (2020-02-22T19:40:35Z)
- Break It Down: A Question Understanding Benchmark [79.41678884521801]
We introduce a Question Decomposition Meaning Representation (QDMR) for questions.
QDMR constitutes the ordered list of steps, expressed through natural language, that are necessary for answering a question.
We release the Break dataset, containing over 83K pairs of questions and their QDMRs.
arXiv Detail & Related papers (2020-01-31T11:04:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.