Discrete Reasoning Templates for Natural Language Understanding
- URL: http://arxiv.org/abs/2104.02115v1
- Date: Mon, 5 Apr 2021 18:56:56 GMT
- Title: Discrete Reasoning Templates for Natural Language Understanding
- Authors: Hadeel Al-Negheimish, Pranava Madhyastha, Alessandra Russo
- Abstract summary: We present an approach that reasons about complex questions by decomposing them into simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state-of-the-art while being interpretable and requiring little supervision.
- Score: 79.07883990966077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reasoning about information from multiple parts of a passage to derive an
answer is an open challenge for reading-comprehension models. In this paper, we
present an approach that reasons about complex questions by decomposing them into
simpler subquestions that can take advantage of single-span extraction
reading-comprehension models, and derives the final answer according to
instructions in a predefined reasoning template. We focus on subtraction-based
arithmetic questions and evaluate our approach on a subset of the DROP dataset.
We show that our approach is competitive with the state-of-the-art while being
interpretable and requiring little supervision.
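To make the approach concrete, here is a minimal sketch of a subtraction reasoning template: two subquestions are answered by an off-the-shelf single-span extraction model, and the template's instruction combines the extracted numbers. The helper names (`answer_span`, `parse_number`) and the fixed two-subquestion format are illustrative assumptions, not the paper's actual implementation.

```python
import re
from typing import Callable

def parse_number(span: str) -> float:
    """Pull the first numeric value out of an extracted answer span."""
    match = re.search(r"-?\d+(?:\.\d+)?", span.replace(",", ""))
    if match is None:
        raise ValueError(f"no number found in span: {span!r}")
    return float(match.group())

def subtraction_template(passage: str, sub_q1: str, sub_q2: str,
                         answer_span: Callable[[str, str], str]) -> float:
    """Answer each subquestion with a single-span RC model, then apply the
    template's instruction: subtract the second value from the first."""
    value_1 = parse_number(answer_span(passage, sub_q1))
    value_2 = parse_number(answer_span(passage, sub_q2))
    return value_1 - value_2
```

For a DROP-style question such as "How many years after event A did event B happen?", the decomposition would yield "When did event B happen?" and "When did event A happen?", and the template subtracts the two extracted years.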
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address the limitations of each.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- Optimizing Language Model's Reasoning Abilities with Weak Supervision [48.60598455782159]
We present PuzzleBen, a weakly supervised benchmark that comprises 25,147 complex questions, answers, and human-generated rationales.
A unique aspect of our dataset is the inclusion of 10,000 unannotated questions, enabling us to explore using less supervised data to boost LLMs' inference capabilities.
arXiv Detail & Related papers (2024-05-07T07:39:15Z)
- Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension [13.896697187967547]
We crowdsource rationale texts that explain why we should select or eliminate answer options from a logical reading comprehension dataset.
Experiments show that recent large language models (e.g., InstructGPT) struggle to answer the subquestions even if they are able to answer the main questions correctly.
arXiv Detail & Related papers (2023-11-30T08:44:55Z)
- Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning [14.219239732584368]
In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering.
Empirical results and human evaluations show that our framework generates more faithful reasoning chains and substantially improves QA performance on two benchmark datasets.
arXiv Detail & Related papers (2023-11-07T05:32:39Z)
- Elaborative Simplification as Implicit Questions Under Discussion [51.17933943734872]
This paper proposes to view elaborative simplification through the lens of the Question Under Discussion (QUD) framework.
We show that explicitly modeling QUD provides essential understanding of elaborative simplification and how the elaborations connect with the rest of the discourse.
arXiv Detail & Related papers (2023-05-17T17:26:16Z)
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into a simple task, solve it, and then repeat the process until we get the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
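The iterative loop described in the Successive Prompting entry above can be sketched as follows; the `llm(prompt) -> str` completion function and the DONE stopping convention are hypothetical stand-ins, not the paper's actual prompts.

```python
# Minimal sketch of a successive-prompting loop, assuming only a generic
# llm(prompt) -> str completion function (a hypothetical stand-in).
def successive_prompting(question: str, llm, max_steps: int = 8) -> str:
    """Alternately ask for the next simple subquestion and answer it."""
    context = f"Complex question: {question}\n"
    for _ in range(max_steps):
        # Ask the model to propose the next simple subquestion.
        sub_q = llm(context + "Next subquestion (or DONE):").strip()
        if sub_q == "DONE":
            break
        # Answer the subquestion and append the QA pair to the context,
        # so later steps can build on earlier intermediate answers.
        sub_a = llm(context + f"Answer briefly: {sub_q}").strip()
        context += f"Q: {sub_q}\nA: {sub_a}\n"
    # Derive the final answer from the accumulated subquestion answers.
    return llm(context + "Final answer to the complex question:").strip()
```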
- Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension [35.65149154213124]
We propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system.
Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner.
Experiments demonstrate that the proposed abstractive explainer can generate more compact explanations than an extractive explainer with limited supervision.
arXiv Detail & Related papers (2021-09-14T17:44:34Z)
- EviDR: Evidence-Emphasized Discrete Reasoning for Reasoning Machine Reading Comprehension [39.970232108247394]
Reasoning machine reading comprehension (R-MRC) aims to answer complex questions that require discrete reasoning based on text.
Previous end-to-end methods that achieve state-of-the-art performance rarely place enough emphasis on modeling the evidence.
We propose an evidence-emphasized discrete reasoning approach (EviDR), in which sentence and clause level evidence is first detected based on distant supervision.
arXiv Detail & Related papers (2021-08-18T06:49:58Z)
- Multi-hop Inference for Question-driven Summarization [39.08269647808958]
We propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG).
MSG incorporates multi-hop reasoning into question-driven summarization while also providing justifications for the generated summaries.
Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets.
arXiv Detail & Related papers (2020-10-08T02:36:39Z)
- Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models [61.480085460269514]
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
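The division of labor described in the Text Modular Networks entry above (a neural factoid single-span QA module plus a symbolic calculator) can be sketched as follows; `span_qa` and the operation table are hypothetical stand-ins, and the learned decomposer is omitted.

```python
# Minimal sketch of a ModularQA-style pipeline: a neural single-span QA
# model answers factoid subquestions, and a symbolic calculator combines
# the results. Assumes the extracted spans are numeric strings.
import operator

CALCULATOR = {"add": operator.add, "subtract": operator.sub,
              "multiply": operator.mul, "divide": operator.truediv}

def modular_qa(passage: str, sub_questions: list[str], op: str,
               span_qa) -> float:
    """Answer each subquestion neurally, then combine symbolically."""
    values = [float(span_qa(passage, q)) for q in sub_questions]
    result = values[0]
    for value in values[1:]:
        result = CALCULATOR[op](result, value)
    return result
```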
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.