Formal Proofs as Structured Explanations: Proposing Several Tasks on Explainable Natural Language Inference
- URL: http://arxiv.org/abs/2311.08637v2
- Date: Thu, 06 Feb 2025 21:50:13 GMT
- Title: Formal Proofs as Structured Explanations: Proposing Several Tasks on Explainable Natural Language Inference
- Authors: Lasha Abzianidze
- Abstract summary: We propose a reasoning framework that can model the reasoning process underlying natural language inferences. The framework is based on the semantic tableau method, a well-studied proof system in formal logic. We show how it can be used to define natural language reasoning tasks with structured explanations.
- Score: 0.16317061277457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this position paper, we propose a reasoning framework that can model the reasoning process underlying natural language inferences. The framework is based on the semantic tableau method, a well-studied proof system in formal logic. Like the semantic tableau, the framework is driven by refutation -- something is proved if and only if its counterexample is refuted. Despite being rooted in formal logic, the framework shares similarities with mental models, a theory from the psychology of reasoning. We will show how the reasoning framework can facilitate the collection of comprehensive and structured explanations for existing naturalistic inference problems. To make the suggestion more concrete, we propose a method of semi-automatically obtaining structured explanations from the formal proofs of a reliable and high-performing logic-based inference system. Taking advantage of the in-depth information available in the generated formal proofs, we show how the proofs can be used to define natural language reasoning tasks with structured explanations. The proposed tasks can be ordered according to difficulty, defined in terms of the granularity of explanations. We argue that the tasks that contain a natural sketch of the proofs will suffer from substantially fewer shortcomings than the existing explainable reasoning tasks (or datasets).
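To make the refutation-driven idea in the abstract concrete, the following is a minimal sketch of a propositional semantic tableau prover: to prove that premises entail a conclusion, assume the premises together with the negated conclusion (the counterexample scenario) and try to refute every branch. This is only an illustration of the general tableau method, not the authors' natural-language tableau system; the function names (`entails`, `expand`, `closed`) and the tuple encoding of formulas are hypothetical choices made for this sketch.

```python
# Minimal propositional tableau prover (illustrative sketch only).
# Formulas are tuples: ("atom", name), ("not", f), ("and", f, g), ("or", f, g).

def expand(branch):
    """Expand one non-literal formula on the branch.
    Returns the resulting sub-branches, or None if the branch is saturated
    (contains only literals)."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if f[0] == "and":                     # alpha rule: both conjuncts stay on the branch
            return [rest + [f[1], f[2]]]
        if f[0] == "or":                      # beta rule: the branch splits
            return [rest + [f[1]], rest + [f[2]]]
        if f[0] == "not":
            g = f[1]
            if g[0] == "not":                 # double negation
                return [rest + [g[1]]]
            if g[0] == "and":                 # not(A and B) behaves like (not A) or (not B)
                return [rest + [("not", g[1])], rest + [("not", g[2])]]
            if g[0] == "or":                  # not(A or B) behaves like (not A) and (not B)
                return [rest + [("not", g[1]), ("not", g[2])]]
    return None

def closed(branch):
    """A branch is refuted (closed) if it contains both p and not p."""
    atoms = {f[1] for f in branch if f[0] == "atom"}
    neg_atoms = {f[1][1] for f in branch if f[0] == "not" and f[1][0] == "atom"}
    return bool(atoms & neg_atoms)

def entails(premises, conclusion):
    """Premises entail the conclusion iff every branch of the
    counterexample tableau (premises + negated conclusion) is refuted."""
    branches = [list(premises) + [("not", conclusion)]]
    while branches:
        b = branches.pop()
        if closed(b):
            continue                          # this part of the counterexample is refuted
        subs = expand(b)
        if subs is None:
            return False                      # open saturated branch: a counterexample survives
        branches.extend(subs)
    return True                               # all branches refuted: the inference is proved

# "It rains and it is cold" entails "it is cold", but not "it is sunny".
rain, cold, sunny = ("atom", "rain"), ("atom", "cold"), ("atom", "sunny")
print(entails([("and", rain, cold)], cold))   # True
print(entails([("and", rain, cold)], sunny))  # False
```

In this sketch an inference counts as proved exactly when every branch of the counterexample tableau closes; an open saturated branch corresponds to a surviving counterexample, i.e. a non-entailment. The paper's framework applies the same refutation-driven idea directly to natural language material rather than to propositional formulas.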
Related papers
- Reasoning-Grounded Natural Language Explanations for Language Models [2.7855886538423182]
We propose a large language model explainability technique for obtaining faithful natural language explanations.
When converted to a sequence of tokens, the outputs of the reasoning process can become part of the model context.
We show that the proposed use of reasoning can also improve the quality of the answers.
arXiv Detail & Related papers (2025-03-14T10:00:03Z) - Dialogue-based Explanations for Logical Reasoning using Structured Argumentation [0.06138671548064355]
We identify structural weaknesses of the state-of-the-art and propose a generic argumentation-based approach to address these problems.
Our work provides dialogue models as dialectic-proof procedures to compute and explain a query answer.
This allows us to construct dialectical proof trees as explanations, which are more expressive and arguably more intuitive than existing explanation formalisms.
arXiv Detail & Related papers (2025-02-16T22:26:18Z) - Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z) - Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving [13.485604499678262]
This paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs).
We present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences.
In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements.
arXiv Detail & Related papers (2024-05-02T15:20:01Z) - An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models [99.31449616860291]
Modern language models (LMs) can learn to perform new tasks in different ways.
In instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly.
In instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description.
arXiv Detail & Related papers (2024-04-03T19:31:56Z) - Towards Generalizable and Faithful Logic Reasoning over Natural Language via Resolution Refutation [24.584926992534346]
We propose a novel framework, named Generalizable and Faithful Reasoner (GFaiR), which introduces the paradigm of resolution refutation.
Resolution refutation has the capability to solve all first-order logic reasoning problems by extending reasoning rules and employing the principle of proof by contradiction.
Our system outperforms previous works, achieving state-of-the-art performance in complex scenarios while maintaining performance in simple scenarios.
arXiv Detail & Related papers (2024-04-02T06:28:44Z) - Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate [75.10515686215177]
Large Language Models (LLMs) excel in text generation, but their capability for producing faithful explanations in fact-checking remains underexamined.
We propose the Multi-Agent Debate Refinement (MADR) framework, leveraging multiple LLMs as agents with diverse roles.
MADR ensures that the final explanation undergoes rigorous validation, significantly reducing the likelihood of unfaithful elements and aligning closely with the provided evidence.
arXiv Detail & Related papers (2024-02-12T04:32:33Z) - FaithLM: Towards Faithful Explanations for Large Language Models [67.29893340289779]
Large Language Models (LLMs) have become proficient in addressing complex tasks by leveraging their internal knowledge and reasoning capabilities.
The black-box nature of these models complicates the task of explaining their decision-making processes.
We introduce FaithLM to explain the decision of LLMs with natural language (NL) explanations.
arXiv Detail & Related papers (2024-02-07T09:09:14Z) - Logic-Scaffolding: Personalized Aspect-Instructed Recommendation Explanation Generation using LLMs [20.446594942586604]
We propose a framework called Logic-Scaffolding that combines the ideas of aspect-based explanation and chain-of-thought prompting to generate explanations through intermediate reasoning steps.
In this paper, we share our experience in building the framework and present an interactive demonstration for exploring our results.
arXiv Detail & Related papers (2023-12-22T00:30:10Z) - Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches have been adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, whose structured outputs can naturally be regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, yielding higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning.
arXiv Detail & Related papers (2023-11-16T11:26:21Z) - Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure [66.33623392497599]
We show that a structure called the template-content structure (T-C structure) can reduce the possible space from exponential to linear.
We demonstrate that models can achieve task composition, further reducing the learning space from linear to logarithmic.
arXiv Detail & Related papers (2023-10-09T06:57:45Z) - Trustworthy Formal Natural Language Specifications [3.8073142980733]
This paper shows it is possible to build support for specifications written in expressive subsets of natural language.
We implement a means to provide specifications in a modularly formal subset of English, and have them automatically translated into formal claims.
We produce proof certificates explaining how each word was interpreted and how the sentence's structure was used to compute the meaning.
arXiv Detail & Related papers (2023-10-05T20:41:47Z) - Stable Normative Explanations: From Argumentation to Deontic Logic [1.3272510644778104]
This paper examines how a notion of stable explanation can be expressed in the context of formal argumentation.
We show how to build, from argumentation, neighborhood structures for deontic logic in which this notion of explanation can be characterised.
arXiv Detail & Related papers (2023-07-11T10:26:05Z) - From Robustness to Explainability and Back Again [0.685316573653194]
The paper addresses the scalability limitations of formal explainability and proposes novel algorithms for computing formal explanations.
The proposed algorithm computes explanations by instead answering a number of robustness queries, where the number of such queries is at most linear in the number of features.
The experiments validate the practical efficiency of the proposed approach.
arXiv Detail & Related papers (2023-06-05T17:21:05Z) - Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z) - Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z) - Towards Interpretable Natural Language Understanding with Explanations as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z) - NILE: Natural Language Inference with Faithful Natural Language Explanations [10.074153632701952]
We propose Natural-language Inference over Label-specific Explanations (NILE), a novel NLI method that utilizes auto-generated label-specific explanations to produce labels along with faithful explanations.
We discuss the faithfulness of NILE's explanations in terms of sensitivity of the decisions to the corresponding explanations.
arXiv Detail & Related papers (2020-05-25T13:56:03Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.