Complex Reading Comprehension Through Question Decomposition
- URL: http://arxiv.org/abs/2211.03277v1
- Date: Mon, 7 Nov 2022 02:54:04 GMT
- Title: Complex Reading Comprehension Through Question Decomposition
- Authors: Xiao-Yu Guo, Yuan-Fang Li, and Gholamreza Haffari
- Abstract summary: We propose a novel learning approach that helps language models better understand difficult multi-hop questions.
Our model first learns to decompose each multi-hop question into several sub-questions by a trainable question decomposer.
We leverage a reading comprehension model to predict the answer in a sequence-to-sequence manner.
- Score: 48.256818683923626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-hop reading comprehension requires not only the ability to reason over
raw text but also the ability to combine multiple pieces of evidence. We propose a novel
learning approach that helps language models better understand difficult
multi-hop questions and perform "complex, compositional" reasoning. Our model
first learns to decompose each multi-hop question into several sub-questions by
a trainable question decomposer. Instead of answering these sub-questions, we
directly concatenate them with the original question and context, and leverage
a reading comprehension model to predict the answer in a sequence-to-sequence
manner. By using the same language model for these two components, our best
separate/unified t5-base variants outperform the baseline by 7.2/6.1 absolute
F1 points on a hard subset of the DROP dataset.
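The abstract's key design choice is that sub-questions are not answered individually; they are simply concatenated with the original question and context before being passed to a seq2seq reading-comprehension model. A minimal sketch of that input construction is below. The function names, the input template, and the stubbed decomposer output are all assumptions for illustration; in the paper both components are trained T5 models.

```python
from typing import List


def decompose(question: str) -> List[str]:
    """Hypothetical stand-in for the paper's trainable question decomposer.

    A real decomposer is a fine-tuned seq2seq model that generates
    sub-questions; here we return a fixed example decomposition.
    """
    return [
        "How many field goals did Akers kick?",
        "How many field goals did Hanson kick?",
    ]


def build_rc_input(question: str, context: str, sub_questions: List[str]) -> str:
    """Concatenate sub-questions with the original question and context.

    The resulting string is what a seq2seq reading-comprehension model
    would consume to predict the answer directly.
    """
    subs = " ".join(sub_questions)
    return f"question: {question} sub-questions: {subs} context: {context}"


question = "Who kicked more field goals, Akers or Hanson?"
context = "Akers kicked three field goals while Hanson kicked two."
rc_input = build_rc_input(question, context, decompose(question))
```

The exact delimiter format ("question:", "sub-questions:", "context:") is a guess; the important point is that the answer model sees the decomposition as extra conditioning text rather than as separate QA turns.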
Related papers
- STOC-TOT: Stochastic Tree-of-Thought with Constrained Decoding for Complex Reasoning in Multi-Hop Question Answering [8.525847131940031]
Multi-hop question answering (MHQA) requires a model to retrieve and integrate information from multiple passages to answer a complex question.
Recent systems leverage the power of large language models and integrate evidence retrieval with reasoning prompts.
We propose STOC-TOT, a tree-of-thought reasoning prompting method with constrained decoding for MHQA.
arXiv Detail & Related papers (2024-07-04T07:17:53Z) - HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering using LLMs [9.559336828884808]
Large Language Models (LLMs) are adept at answering simple (single-hop) questions.
As the complexity of the questions increases, the performance of LLMs degrades.
Recent methods try to reduce this burden by integrating structured knowledge triples into the raw text.
We propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information.
arXiv Detail & Related papers (2024-06-10T05:22:49Z) - Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z) - Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into a simpler task, solve it, and then repeat the process until we reach the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
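The iterate-decompose-solve loop described above can be sketched as a simple control flow. The stub functions below are hypothetical stand-ins: a real system would prompt a large LM for the next sub-question and its answer at each step.

```python
from typing import List, Optional, Tuple


def next_subquestion(question: str, qa_history: List[Tuple[str, str]]) -> Optional[str]:
    """Stub decomposer: return the next sub-question, or None when done.

    A real implementation would prompt an LM with the question and the
    sub-question/answer history so far.
    """
    scripted = [
        "How many points did team A score?",
        "How many points did team B score?",
    ]
    return scripted[len(qa_history)] if len(qa_history) < len(scripted) else None


def answer(subquestion: str) -> str:
    """Stub single-hop QA model with canned answers."""
    return {
        "How many points did team A score?": "21",
        "How many points did team B score?": "14",
    }[subquestion]


def successive_prompting(question: str) -> List[Tuple[str, str]]:
    """Repeat decompose-then-answer until no sub-question remains."""
    qa_history: List[Tuple[str, str]] = []
    while (sub := next_subquestion(question, qa_history)) is not None:
        qa_history.append((sub, answer(sub)))
    return qa_history


history = successive_prompting("By how many points did team A beat team B?")
```

After the loop, the accumulated sub-question/answer pairs condition a final prompt that produces the answer; that last step is omitted here.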
arXiv Detail & Related papers (2022-12-08T06:03:38Z) - Locate Then Ask: Interpretable Stepwise Reasoning for Multi-hop Question Answering [71.49131159045811]
Multi-hop reasoning requires aggregating multiple documents to answer a complex question.
Existing methods usually decompose the multi-hop question into simpler single-hop questions.
We propose an interpretable stepwise reasoning framework to incorporate both single-hop supporting sentence identification and single-hop question generation.
arXiv Detail & Related papers (2022-08-22T13:24:25Z) - Semantic Sentence Composition Reasoning for Multi-Hop Question Answering [1.773120658816994]
We present a semantic sentence composition reasoning approach for a multi-hop question answering task.
With the combination of factual sentences and multi-stage semantic retrieval, our approach can provide more comprehensive contextual information for model training and reasoning.
Experimental results demonstrate our model is able to incorporate existing pre-trained language models and outperform the existing SOTA method on the QASC task with an improvement of about 9%.
arXiv Detail & Related papers (2022-03-01T00:35:51Z) - Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state-of-the-art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z) - Coarse-grained decomposition and fine-grained interaction for multi-hop question answering [5.88731657602706]
Many complex queries require multi-hop reasoning.
Bi-DAF generally captures only the surface semantics of words in complex questions.
We propose a new model architecture for multi-hop question answering.
arXiv Detail & Related papers (2021-01-15T06:56:34Z) - Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models [61.480085460269514]
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.