OPERA: Operation-Pivoted Discrete Reasoning over Text
- URL: http://arxiv.org/abs/2204.14166v1
- Date: Fri, 29 Apr 2022 15:41:47 GMT
- Title: OPERA: Operation-Pivoted Discrete Reasoning over Text
- Authors: Yongwei Zhou, Junwei Bao, Chaoqun Duan, Haipeng Sun, Jiahui Liang,
Yifan Wang, Jing Zhao, Youzheng Wu, Xiaodong He, Tiejun Zhao
- Abstract summary: OPERA is an operation-pivoted discrete reasoning framework for machine reading comprehension.
It uses lightweight symbolic operations as neural modules to facilitate the reasoning ability and interpretability.
Experiments on both DROP and RACENum datasets show the reasoning ability of OPERA.
- Score: 33.36388276371693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine reading comprehension (MRC) that requires discrete reasoning
involving symbolic operations, e.g., addition, sorting, and counting, is a
challenging task. Given this nature, semantic parsing-based methods
predict interpretable but complex logical forms. However, logical form
generation is nontrivial, and even a small perturbation in a logical form can
lead to a wrong answer. To alleviate this issue, multi-predictor-based methods
have been proposed that directly predict different types of answers and achieve
improvements. However, they do not exploit symbolic operations and therefore
lack reasoning ability and interpretability. To inherit the
advantages of both types of methods, we propose OPERA, an
operation-pivoted discrete reasoning framework in which lightweight symbolic
operations (compared with logical forms) serve as neural modules that
enhance reasoning ability and interpretability. Specifically, operations
are first selected and then softly executed to simulate the answer reasoning
procedure. Extensive experiments on both the DROP and RACENum datasets show the
reasoning ability of OPERA. Moreover, further analysis verifies its
interpretability.
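The "select, then softly execute" idea from the abstract can be illustrated with a minimal sketch: score a small set of symbolic operations, turn the scores into a softmax distribution, and return the probability-weighted mixture of each operation's result. All names and the three toy operations here are illustrative assumptions, not the OPERA paper's actual module set.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of operation scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def soft_execute(numbers, op_scores):
    # Toy symbolic operations over numbers extracted from a passage
    # (hypothetical stand-ins for the lightweight operations in OPERA).
    ops = {
        "count": lambda xs: float(len(xs)),
        "sum":   lambda xs: float(sum(xs)),
        "max":   lambda xs: float(max(xs)),
    }
    names = list(ops)
    probs = softmax([op_scores[n] for n in names])
    results = [ops[n](numbers) for n in names]
    # Soft execution: mix the operations' results by their selection
    # probabilities instead of committing to a single discrete choice,
    # which keeps the whole procedure differentiable end to end.
    answer = sum(p * r for p, r in zip(probs, results))
    return answer, dict(zip(names, probs))

# The model strongly favors "sum", so the answer is pulled toward 10.0.
answer, probs = soft_execute([3.0, 5.0, 2.0],
                             {"count": 0.0, "sum": 4.0, "max": 0.0})
```

In a trained model the operation scores would come from a neural encoder conditioned on the question and passage; the mixture above is what makes a discrete choice trainable with gradient descent.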
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address limitations.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z) - Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models [17.76252625790628]
This paper presents Think-and-Execute, a framework that decomposes the reasoning process of language models into two steps.
With extensive experiments on seven algorithmic reasoning tasks, we demonstrate the effectiveness of Think-and-Execute.
arXiv Detail & Related papers (2024-04-03T08:49:11Z) - LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs)
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z) - Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches are adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, and the structured outputs can be naturally regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, achieving higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted as human-readable reasoning steps.
arXiv Detail & Related papers (2023-11-16T11:26:21Z) - Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z) - Representation Synthesis by Probabilistic Many-Valued Logic Operation in Self-Supervised Learning [9.339914898177186]
We propose a new self-supervised learning (SSL) method for representations that enable logic operations.
Our method can generate a representation that has the features of both representations or only those features common to both representations.
Experiments on image retrieval using MNIST and PascalVOC showed that the representations produced by our method can be combined with OR and AND operations.
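The OR and AND operations mentioned above can be sketched with standard probabilistic many-valued logic: if each component of a representation is treated as a probability in [0, 1], AND is the elementwise product and OR is the probabilistic sum a + b - a*b. This is a generic illustration of probabilistic logic on vectors, assuming that formulation; the paper's exact operators may differ.

```python
def logic_and(a, b):
    # Probabilistic AND: elementwise product keeps only features
    # that are strongly present in both representations.
    return [x * y for x, y in zip(a, b)]

def logic_or(a, b):
    # Probabilistic OR (probabilistic sum): a feature survives if it
    # is present in either representation.
    return [x + y - x * y for x, y in zip(a, b)]

# Two hypothetical [0, 1]-valued feature vectors.
r1 = [0.9, 0.1, 0.8]
r2 = [0.9, 0.7, 0.2]

common = logic_and(r1, r2)  # features shared by both representations
merged = logic_or(r1, r2)   # features of either representation
```

Both operations reduce to Boolean AND/OR when the components are exactly 0 or 1, which is why they are a natural continuous relaxation for learned representations.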
arXiv Detail & Related papers (2023-09-08T06:24:44Z) - Prediction or Comparison: Toward Interpretable Qualitative Reasoning [16.02199526395448]
Current approaches use either semantic parsing to transform natural language inputs into logical expressions, or a "black-box" model to solve them in one step.
In this work, we categorize qualitative reasoning tasks into two types: prediction and comparison.
In particular, we adopt neural network modules trained in an end-to-end manner to simulate the two reasoning processes.
arXiv Detail & Related papers (2021-06-04T10:27:55Z) - Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves state-of-the-art performance, and both the logic-driven context extension framework and the data augmentation algorithm help improve accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z) - AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge understanding and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.