Learning to Reason Deductively: Math Word Problem Solving as Complex
Relation Extraction
- URL: http://arxiv.org/abs/2203.10316v1
- Date: Sat, 19 Mar 2022 12:37:16 GMT
- Title: Learning to Reason Deductively: Math Word Problem Solving as Complex
Relation Extraction
- Authors: Zhanming Jie, Jierui Li, Wei Lu
- Abstract summary: Solving math word problems requires deductive reasoning over the quantities in the text.
Recent research efforts have mostly relied on sequence-to-sequence or sequence-to-tree models to generate expressions.
We propose a novel approach that presents explainable deductive reasoning steps to iteratively construct target expressions.
- Score: 10.721488421356053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving math word problems requires deductive reasoning over the quantities
in the text. Recent research efforts have mostly relied on
sequence-to-sequence or sequence-to-tree models to generate mathematical
expressions without explicitly performing relational reasoning between
quantities in the given context. While empirically effective, such approaches
typically do not provide explanations for the generated expressions. In this
work, we view the task as a complex relation extraction problem, proposing a
novel approach that presents explainable deductive reasoning steps to
iteratively construct target expressions, where each step involves a primitive
operation over two quantities defining their relation. Through extensive
experiments on four benchmark datasets, we show that the proposed model
significantly outperforms existing strong baselines. We further demonstrate
that the deductive procedure not only presents more explainable steps but also
enables us to make more accurate predictions on questions that require more
complex reasoning.
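The abstract describes an iterative procedure in which each deductive step applies a primitive operation to two quantities and feeds the result back into the pool of available quantities. A minimal sketch of that loop, assuming an illustrative `score` callable in place of the model's learned relation scorer (all names here are hypothetical, not the authors' code):

```python
from itertools import combinations
from operator import add, sub, mul, truediv

# Primitive operations over two quantities (the paper's relation set also
# distinguishes operand order; that detail is omitted in this sketch).
OPS = {"+": add, "-": sub, "*": mul, "/": truediv}

def deduce(quantities, n_steps, score):
    """Iteratively pick the best-scoring (pair, operation) relation.

    `score(a, b, op_name)` stands in for the model's learned relation
    scorer; any callable returning a number works here.
    """
    trace = []
    for _ in range(n_steps):
        # Enumerate every candidate relation between two known quantities.
        a, b, name = max(
            ((x, y, op) for x, y in combinations(quantities, 2) for op in OPS),
            key=lambda cand: score(*cand),
        )
        result = OPS[name](a, b)            # apply the primitive operation
        trace.append((a, name, b, result))  # one explainable deductive step
        quantities = quantities + [result]  # result is reusable in later steps
    return trace

# Toy usage: a scorer that prefers addition over the largest quantities.
trace = deduce([3.0, 5.0], n_steps=1, score=lambda a, b, op: (op == "+") + a + b)
# trace → [(3.0, "+", 5.0, 8.0)]
```

The returned trace is what makes the procedure explainable: each tuple records which quantities were combined, by which operation, and with what result.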
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address limitations.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- The Impact of Reasoning Step Length on Large Language Models [40.546685248243534]
Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models.
We investigate the correlation between the effectiveness of CoT and the length of reasoning steps in prompts.
arXiv Detail & Related papers (2024-01-10T04:37:38Z)
- Deductive Verification of Chain-of-Thought Reasoning [22.79166959432764]
Large Language Models (LLMs) benefit from Chain-of-Thought prompting in performing various reasoning tasks.
While CoT allows models to produce more comprehensive reasoning processes, its emphasis on intermediate reasoning steps can inadvertently introduce hallucinations and accumulated errors.
We propose Natural Program, a natural language-based deductive reasoning format.
arXiv Detail & Related papers (2023-06-06T17:18:56Z)
- Faith and Fate: Limits of Transformers on Compositionality [109.79516190693415]
We investigate the limits of transformer large language models across three representative compositional tasks.
These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer.
Our empirical findings suggest that transformer LLMs solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching.
arXiv Detail & Related papers (2023-05-29T23:24:14Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Learning to Reason With Relational Abstractions [65.89553417442049]
We study how to build stronger reasoning capability in language models using the idea of relational abstractions.
We find that models that are supplied with such sequences as prompts can solve tasks with a significantly higher accuracy.
arXiv Detail & Related papers (2022-10-06T00:27:50Z)
- Rationale-Augmented Ensembles in Language Models [53.45015291520658]
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component to robustly improve performance.
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
arXiv Detail & Related papers (2022-07-02T06:20:57Z)
- Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability [30.01308882849197]
We propose a new methodology for creating challenging algorithmic reasoning datasets.
The key idea is to draw insights from empirical sampling of hard propositional SAT problems and from complexity-theoretic studies of language.
We find that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSat problems.
arXiv Detail & Related papers (2021-12-16T17:47:20Z)
- Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state of the art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.