Formal Proofs as Structured Explanations: Proposing Several Tasks on Explainable Natural Language Inference
- URL: http://arxiv.org/abs/2311.08637v2
- Date: Thu, 06 Feb 2025 21:50:13 GMT
- Title: Formal Proofs as Structured Explanations: Proposing Several Tasks on Explainable Natural Language Inference
- Authors: Lasha Abzianidze
- Abstract summary: We propose a reasoning framework that can model the reasoning process underlying natural language inferences.
The framework is based on the semantic tableau method, a well-studied proof system in formal logic.
We show how the framework can be used to define natural language reasoning tasks with structured explanations.
- Score: 0.16317061277457
- Abstract: In this position paper, we propose a reasoning framework that can model the reasoning process underlying natural language inferences. The framework is based on the semantic tableau method, a well-studied proof system in formal logic. Like the semantic tableau, the framework is driven by refutation: something is proved if and only if its counterexample is refuted. Despite being rooted in formal logic, the framework shares similarities with mental models, a theory of the psychology of reasoning. We will show how the reasoning framework can facilitate the collection of comprehensive and structured explanations for existing naturalistic inference problems. To make the suggestion more concrete, we propose a method for semi-automatically obtaining structured explanations from the formal proofs of a reliable and high-performing logic-based inference system. Taking advantage of the in-depth information available in the generated formal proofs, we show how this information can be used to define natural language reasoning tasks with structured explanations. The proposed tasks can be ordered according to difficulty, defined in terms of the granularity of explanations. We argue that tasks containing a natural sketch of the proofs will suffer from substantially fewer shortcomings than existing explainable reasoning tasks (or datasets).
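To make the refutation-driven mechanism concrete, here is a minimal propositional tableau prover. It is a sketch under this summary's own assumptions: the tuple encoding of formulas and the function names are invented for illustration, not taken from the paper or its logic-based inference system. It exhibits the property the abstract names: a conclusion is proved exactly when the attempted counterexample (premises plus negated conclusion) is refuted, i.e., every tableau branch closes on a contradiction.

```python
# Minimal propositional semantic tableau (illustrative sketch).
# Formula encoding (an assumption of this sketch, not the paper's):
# atoms are strings; compound formulas are tuples:
#   ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g).

def negate(f):
    """Negate a formula, collapsing double negation."""
    return f[1] if isinstance(f, tuple) and f[0] == 'not' else ('not', f)

def closes(branch):
    """Return True iff every expansion of this branch hits a contradiction."""
    for f in branch:                         # branch closes: f and not-f present
        if negate(f) in branch:
            return True
    for f in branch:
        if not isinstance(f, tuple):
            continue
        rest = [g for g in branch if g != f]
        op = f[0]
        if op == 'and':                      # alpha rule: add both conjuncts
            return closes(rest + [f[1], f[2]])
        if op == 'or':                       # beta rule: the branch splits
            return closes(rest + [f[1]]) and closes(rest + [f[2]])
        if op == 'imp':                      # A -> B behaves like (not A) or B
            return closes(rest + [negate(f[1])]) and closes(rest + [f[2]])
        if op == 'not' and isinstance(f[1], tuple):
            g = f[1]                         # push the negation inward
            if g[0] == 'not':
                return closes(rest + [g[1]])
            if g[0] == 'and':
                return closes(rest + [negate(g[1])]) and closes(rest + [negate(g[2])])
            if g[0] == 'or':
                return closes(rest + [negate(g[1]), negate(g[2])])
            if g[0] == 'imp':
                return closes(rest + [g[1], negate(g[2])])
    return False                             # saturated open branch: a counterexample survives

def proves(premises, goal):
    """Proved iff the counterexample branch (premises + negated goal) is refuted."""
    return closes(list(premises) + [negate(goal)])

# Modus ponens holds; the converse inference does not.
assert proves(['p', ('imp', 'p', 'q')], 'q')
assert not proves(['q'], 'p')
```

The closed branches of such a proof are the kind of step-by-step structure the abstract proposes to mine for structured explanations.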
Related papers
- Dialogue-based Explanations for Logical Reasoning using Structured Argumentation [0.06138671548064355]
We identify structural weaknesses of the state-of-the-art and propose a generic argumentation-based approach to address these problems.
Our work provides dialogue models as dialectical proof procedures to compute and explain a query answer.
This allows us to construct dialectical proof trees as explanations, which are more expressive and arguably more intuitive than existing explanation formalisms.
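The dialectical proof trees this entry mentions can be pictured with a tiny recursive structure. The encoding below is a hypothetical sketch (the node shape and example claims are invented here, not drawn from the paper): a claim stands exactly when every attack on it is itself defeated.

```python
# Illustrative dispute-tree sketch: a node's children are the
# arguments attacking it; a node wins iff none of its attackers win.
from dataclasses import dataclass, field

@dataclass
class Node:
    claim: str
    children: list = field(default_factory=list)  # attacking arguments

def wins(node):
    """Game-tree reading of a dialectical proof: a claim stands
    iff every attack on it is itself defeated."""
    return all(not wins(child) for child in node.children)

# The only attack on the query answer is rebutted, so the answer stands.
tree = Node('loan is approved',
            [Node('attack: income too low',
                  [Node('rebuttal: income was verified as sufficient')])])
print(wins(tree))  # True
```

Read top-down, such a tree is the proponent/opponent dialogue the entry describes as an explanation.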
arXiv Detail & Related papers (2025-02-16T22:26:18Z)
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation [19.799266797193344]
Argumentation-based systems often lack explainability while supporting decision-making processes.
Counterfactual and semifactual explanations are interpretability techniques.
We show that counterfactual and semifactual queries can be encoded in a weak-constrained Argumentation Framework.
arXiv Detail & Related papers (2024-05-07T07:27:27Z)
- Towards Generalizable and Faithful Logic Reasoning over Natural Language via Resolution Refutation [24.584926992534346]
We propose a novel framework, named Generalizable and Faithful Reasoner (GFaiR), which introduces the paradigm of resolution refutation.
Resolution refutation can solve all first-order logic reasoning problems by extending the reasoning rules and employing the principle of proof by contradiction.
Our system outperforms previous works, achieving state-of-the-art performance in complex scenarios while maintaining performance in simple ones.
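As a rough picture of the resolution-refutation paradigm this entry builds on, here is a toy propositional prover. The clause encoding and names are this sketch's assumptions, not GFaiR's implementation: entailment is established by deriving the empty clause from the premises together with the negated goal, i.e., by proof by contradiction.

```python
# Toy propositional resolution refutation (illustrative sketch).
# A literal is 'p' or '~p'; a clause is a set/frozenset of literals.

def complement(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (cancel one complementary pair)."""
    return [(c1 - {lit}) | (c2 - {complement(lit)})
            for lit in c1 if complement(lit) in c2]

def entails(clauses, goal_lit):
    """Premises entail the goal iff premises + negated goal are unsatisfiable."""
    known = set(map(frozenset, clauses)) | {frozenset({complement(goal_lit)})}
    while True:
        new = set()
        for a in known:
            for b in known:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:              # empty clause: contradiction reached
                        return True
                    new.add(frozenset(r))
        if new <= known:                   # saturated without contradiction
            return False
        known |= new

# Modus ponens as resolution: {p, ~p or q} entails q but not r.
assert entails([{'p'}, {'~p', 'q'}], 'q')
assert not entails([{'p'}, {'~p', 'q'}], 'r')
```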
arXiv Detail & Related papers (2024-04-02T06:28:44Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches have been adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, whose structured outputs can naturally be regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, yielding higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning steps.
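The move from an interpreter's search log to human-readable reasoning can be miniaturised as follows. This is a hypothetical Python stand-in for a Prolog trace (the rule base, predicate names, and log format are invented for illustration, not taken from the paper): a backward chainer records one line per rule application, and the indented log already reads as a reasoning proof.

```python
# Illustrative backward chainer whose search log doubles as an explanation.
# RULES maps a goal to the subgoals that establish it; facts have no body.
RULES = [
    ('ancestor(ann,cid)', ['parent(ann,bob)', 'ancestor(bob,cid)']),
    ('ancestor(bob,cid)', ['parent(bob,cid)']),
    ('parent(ann,bob)', []),
    ('parent(bob,cid)', []),
]

def prove(goal, log, depth=0):
    """Prove `goal`, appending one human-readable line per rule used.
    Like a raw interpreter trace, the log also keeps failed attempts."""
    for head, body in RULES:
        if head != goal:
            continue
        if body:
            log.append('  ' * depth + f'{goal} because ' + ' and '.join(body))
        else:
            log.append('  ' * depth + f'{goal} is a known fact')
        if all(prove(sub, log, depth + 1) for sub in body):
            return True
    return False

log = []
assert prove('ancestor(ann,cid)', log)
print('\n'.join(log))
# ancestor(ann,cid) because parent(ann,bob) and ancestor(bob,cid)
#   parent(ann,bob) is a known fact
#   ancestor(bob,cid) because parent(bob,cid)
#     parent(bob,cid) is a known fact
```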
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
- Stable Normative Explanations: From Argumentation to Deontic Logic [1.3272510644778104]
This paper examines how a notion of stable explanation can be expressed in the context of formal argumentation.
We show how to build from argumentation neighborhood structures for deontic logic where this notion of explanation can be characterised.
arXiv Detail & Related papers (2023-07-11T10:26:05Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
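The mutual-exclusivity signal can be shown with a small numeric sketch. The scores and candidate count below are invented for illustration, and the normalisation is one simple way (not necessarily the paper's) to realise the idea: if at most one explanation is correct for a context, making the candidates compete for probability mass lets evidence for one count against the others, without per-example gold explanations.

```python
# Normalising plausibility scores over rival explanations of one event.
import math

def normalize(scores):
    """Softmax: turn raw scores into a distribution that sums to 1,
    so the candidate explanations behave as mutually exclusive."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.1, 0.3, -1.0]             # model's raw scores for three explanations
probs = normalize(scores)
print([round(p, 3) for p in probs])   # roughly [0.826, 0.137, 0.037]
assert abs(sum(probs) - 1.0) < 1e-9   # mutually exclusive: mass sums to one
```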
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Towards Interpretable Natural Language Understanding with Explanations as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human-annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
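Schematically, treating explanations as latent variables means the label likelihood marginalises over candidate explanations: p(y|x) = sum over e of p(e|x) * p(y|x,e), so a handful of seed explanations can initialise p(e|x) and the rest is learned from the marginal. The notation and toy numbers below are this sketch's assumptions, not the paper's model.

```python
# Illustrative marginal label likelihood with the explanation e latent.
def marginal_label_prob(x, y, candidates, p_expl, p_label):
    """p(y|x) = sum_e p(e|x) * p(y|x,e), marginalising the latent explanation."""
    return sum(p_expl(e, x) * p_label(y, x, e) for e in candidates)

# Toy distributions over two hypothetical explanations e1, e2:
p_expl = lambda e, x: {'e1': 0.7, 'e2': 0.3}[e]
p_label = lambda y, x, e: {('yes', 'e1'): 0.9, ('yes', 'e2'): 0.2}.get((y, e), 0.1)

print(marginal_label_prob('premise/hypothesis pair', 'yes',
                          ['e1', 'e2'], p_expl, p_label))
# 0.7*0.9 + 0.3*0.2 = 0.69
```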
arXiv Detail & Related papers (2020-10-24T02:05:56Z)