FaiRR: Faithful and Robust Deductive Reasoning over Natural Language
- URL: http://arxiv.org/abs/2203.10261v1
- Date: Sat, 19 Mar 2022 07:18:13 GMT
- Title: FaiRR: Faithful and Robust Deductive Reasoning over Natural Language
- Authors: Soumya Sanyal, Harman Singh, Xiang Ren
- Abstract summary: We frame the deductive logical reasoning task by defining three modular components: rule selection, fact selection, and knowledge composition.
We observe that FaiRR is robust to novel language perturbations, and is faster at inference than previous works on existing reasoning datasets.
- Score: 25.319674132967553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformers have been shown to be able to perform deductive reasoning on a
logical rulebase containing rules and statements written in natural language.
Recent works show that such models can also produce the reasoning steps (i.e.,
the proof graph) that emulate the model's logical reasoning process. Currently,
these black-box models generate both the proof graph and intermediate
inferences within the same model and thus may be unfaithful. In this work, we
frame the deductive logical reasoning task by defining three modular
components: rule selection, fact selection, and knowledge composition. The rule
and fact selection steps select the candidate rule and facts to be used and
then the knowledge composition combines them to generate new inferences. This
ensures model faithfulness by guaranteeing a causal relation from each proof
step to the inference it produces. To test our framework, we propose FaiRR
(Faithful and
Robust Reasoner) where the above three components are independently modeled by
transformers. We observe that FaiRR is robust to novel language perturbations,
and is faster at inference than previous works on existing reasoning datasets.
Additionally, in contrast to black-box generative models, the errors made by
FaiRR are more interpretable due to the modular approach.
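Concretely, the modular pipeline described in the abstract can be pictured as a small iterative loop. The sketch below is a minimal illustration assuming a toy rulebase of single-premise "X is Y" rules; the three functions are simple rule-based stand-ins for the separate transformer components in FaiRR, and the `Rule` class, string format, and `forward_chain` driver are hypothetical simplifications, not the paper's implementation.

```python
# Minimal, illustrative sketch of a FaiRR-style modular reasoning loop.
# The three components are trivial stand-ins for the separate transformer
# models used in the paper; the rulebase format is invented for this example.

from dataclasses import dataclass

@dataclass
class Rule:
    premise: str      # attribute required of a subject, e.g. "kind"
    conclusion: str   # attribute inferred for that subject, e.g. "nice"

def select_rule(rules, facts):
    """Rule selection: pick a rule whose premise is met and whose conclusion is new."""
    for rule in rules:
        subjects = [f.split(" is ")[0] for f in facts if f.endswith(f"is {rule.premise}")]
        if any(f"{s} is {rule.conclusion}" not in facts for s in subjects):
            return rule
    return None

def select_facts(rule, facts):
    """Fact selection: pick the facts that satisfy the chosen rule's premise."""
    return [f for f in facts if f.endswith(f"is {rule.premise}")]

def compose(rule, selected_facts):
    """Knowledge composition: combine the rule and facts into new inferences."""
    return [f"{f.split(' is ')[0]} is {rule.conclusion}" for f in selected_facts]

def forward_chain(rules, facts, max_steps=5):
    """Iterate selection and composition, recording one proof step per inference."""
    facts, proof = list(facts), []
    for _ in range(max_steps):
        rule = select_rule(rules, facts)
        if rule is None:
            break
        used = select_facts(rule, facts)
        for inference in compose(rule, used):
            if inference not in facts:
                facts.append(inference)
                proof.append((rule, used, inference))
    return facts, proof

rules = [Rule("kind", "nice"), Rule("nice", "happy")]
all_facts, proof = forward_chain(rules, ["Erin is kind"])
for rule, used, inference in proof:
    print(f"{used} + (if {rule.premise} then {rule.conclusion}) -> {inference}")
```

Because every new statement is produced by the composition step only after an explicit rule and fact selection, the proof step behind each inference is recorded by construction, which is the causal, faithful behaviour the abstract describes.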
Related papers
- QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios [15.193544498311603]
We present QUITE, a dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships.
We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types.
Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning.
arXiv Detail & Related papers (2024-10-14T12:44:59Z)
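As a rough picture of the kind of Bayesian reasoning over categorical random variables that QUITE evaluates, the snippet below answers a query by enumerating a tiny hand-built joint distribution; the variables, probabilities, and the `joint` helper are invented for illustration and are not taken from the QUITE dataset.

```python
# Toy Bayesian reasoning over categorical variables: compute
# P(rain | wet grass) by enumerating the joint distribution.
# All probabilities and variable names are made up for illustration.

from itertools import product

p_rain = {"yes": 0.2, "no": 0.8}
p_sprinkler_given_rain = {"yes": {"on": 0.01, "off": 0.99},
                          "no":  {"on": 0.40, "off": 0.60}}
p_wet_given = {("yes", "on"): 0.99, ("yes", "off"): 0.80,
               ("no", "on"): 0.90,  ("no", "off"): 0.05}

def joint(rain, sprinkler, wet):
    """Probability of one full assignment (rain, sprinkler, wet)."""
    p = p_rain[rain] * p_sprinkler_given_rain[rain][sprinkler]
    p_wet = p_wet_given[(rain, sprinkler)]
    return p * (p_wet if wet else 1.0 - p_wet)

# P(rain = yes | wet) = P(rain = yes, wet) / P(wet)
numerator = sum(joint("yes", s, True) for s in ("on", "off"))
evidence = sum(joint(r, s, True) for r, s in product(("yes", "no"), ("on", "off")))
print(f"P(rain | wet grass) = {numerator / evidence:.3f}")
```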
- How Ambiguous are the Rationales for Natural Language Reasoning? A Simple Approach to Handling Rationale Uncertainty [0.0]
Rationales behind answers not only explain model decisions but also help language models reason well on complex reasoning tasks.
It is non-trivial, however, to estimate how faithful a rationale must be to actually improve model performance.
We propose a method for dealing with imperfect rationales that cause aleatoric uncertainty.
arXiv Detail & Related papers (2024-02-22T07:12:34Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches have been adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, whose structured outputs can naturally be regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning, achieving higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work: we show that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted as human-readable reasoning steps.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
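The work above relies on actual Prolog interpreters; purely to illustrate the idea of reading a search log as reasoning, the toy backward-chaining prover below records each step it takes and prints the trace. The clause table, `prove` function, and trace wording are invented for this sketch and do not reflect the paper's tooling.

```python
# Toy backward chaining over propositional Horn clauses, logging the search.
# Clause format and trace wording are invented for illustration only.

RULES = {
    "mortal(socrates)": [["man(socrates)"]],
    "man(socrates)": [[]],            # a fact: provable with no subgoals
}

def prove(goal, log, depth=0):
    """Try to prove `goal`; append human-readable search steps to `log`."""
    log.append("  " * depth + f"goal: {goal}")
    for body in RULES.get(goal, []):
        if not body:
            log.append("  " * depth + f"  {goal} is a known fact")
            return True
        log.append("  " * depth + f"  trying rule {goal} :- {', '.join(body)}")
        if all(prove(subgoal, log, depth + 1) for subgoal in body):
            log.append("  " * depth + f"  {goal} proved via {', '.join(body)}")
            return True
    log.append("  " * depth + f"  no rule proves {goal}")
    return False

log = []
ok = prove("mortal(socrates)", log)
print("proved" if ok else "not proved")
print("\n".join(log))   # the search log, readable as a reasoning trace
```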
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
The explanation form, built on multi-hop reasoning chains, includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- RobustLR: Evaluating Robustness to Logical Perturbation in Deductive Reasoning [25.319674132967553]
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language.
We propose RobustLR to evaluate the robustness of these models to minimal logical edits in rulebases.
We find that the models trained in prior works do not perform consistently on the different perturbations in RobustLR.
arXiv Detail & Related papers (2022-05-25T09:23:50Z)
- ProoFVer: Natural Logic Theorem Proving for Fact Verification [24.61301908217728]
We propose ProoFVer, a proof system for fact verification using natural logic.
The generation of proofs makes ProoFVer an explainable system.
We find that humans more often correctly simulate ProoFVer's decisions when given its proofs.
arXiv Detail & Related papers (2021-08-25T17:23:04Z)
- Abstract Reasoning via Logic-guided Generation [65.92805601327649]
Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.
This paper aims to design a framework for a generative approach to abstract reasoning and to bridge the gap between artificial and human intelligence.
We propose logic-guided generation (LoGe), a novel generative DNN framework that reduces abstract reasoning to an optimization problem in propositional logic.
arXiv Detail & Related papers (2021-07-22T07:28:24Z)
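LoGe's reduction of abstract reasoning to propositional logic can be loosely illustrated by encoding a small constraint set as CNF clauses and searching for a satisfying assignment; the clauses and the brute-force search below are invented for illustration and are far simpler than the paper's learned, optimization-based formulation.

```python
# Toy illustration of casting a reasoning problem as propositional satisfiability.
# Variables and clauses are invented; LoGe itself combines a generative model
# with a propositional-logic optimization rather than brute-force search.

from itertools import product

# CNF clauses over variables a, b, c: (a or b) and (not a or c) and (not b)
clauses = [[("a", True), ("b", True)],
           [("a", False), ("c", True)],
           [("b", False)]]

def satisfies(assignment, clause):
    """A clause holds if any of its literals matches the assignment."""
    return any(assignment[var] == wanted for var, wanted in clause)

variables = ["a", "b", "c"]
for values in product([False, True], repeat=len(variables)):
    assignment = dict(zip(variables, values))
    if all(satisfies(assignment, c) for c in clauses):
        print("satisfying assignment:", assignment)
        break
```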
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
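The causal EM algorithm itself is more involved, but the underlying expectation-maximisation idea it builds on, recovering a distribution over a latent categorical variable from observed categorical data, can be sketched with a textbook two-component mixture; the coin-flip data and initial parameters below are invented for illustration and this is not the paper's algorithm.

```python
# Generic EM sketch: infer a latent binary variable (which coin produced each
# trial) from observed head counts. Purely illustrative textbook EM, not the
# causal EM algorithm from the paper.

import numpy as np

flips_per_trial = 10
heads = np.array([9, 8, 2, 1, 7, 3, 2, 8])   # observed heads per trial

theta = np.array([0.6, 0.4])   # initial guesses for the two coins' head probs
pi = np.array([0.5, 0.5])      # initial mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each coin for each trial
    log_lik = (heads[:, None] * np.log(theta)
               + (flips_per_trial - heads[:, None]) * np.log(1 - theta)
               + np.log(pi))
    resp = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate coin biases and mixing weights
    theta = (resp * heads[:, None]).sum(0) / (resp.sum(0) * flips_per_trial)
    pi = resp.mean(0)

print("estimated head probabilities:", np.round(theta, 3))
print("estimated mixing weights:", np.round(pi, 3))
```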
- Measuring Association Between Labels and Free-Text Rationales [60.58672852655487]
In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.
We demonstrate that pipelines, existing models for faithful extractive rationalization on information-extraction style tasks, do not extend as reliably to "reasoning" tasks requiring free-text rationales.
We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established.
arXiv Detail & Related papers (2020-10-24T03:40:56Z)