Formal Proofs as Structured Explanations: Proposing Several Tasks on
Explainable Natural Language Inference
- URL: http://arxiv.org/abs/2311.08637v1
- Date: Wed, 15 Nov 2023 01:24:09 GMT
- Title: Formal Proofs as Structured Explanations: Proposing Several Tasks on
Explainable Natural Language Inference
- Authors: Lasha Abzianidze
- Abstract summary: We show how formal proofs can be used to define NLI tasks with structured explanations.
The proposed tasks can be ordered according to difficulty defined in terms of the granularity of explanations.
- Score: 0.16317061277457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this position paper, we propose a way of exploiting formal proofs to put
forward several explainable natural language inference (NLI) tasks. The formal
proofs will be produced by a reliable and high-performing logic-based NLI
system. Taking advantage of the in-depth information available in the generated
formal proofs, we show how this information can be used to define NLI tasks with
structured explanations. The proposed tasks can be ordered by difficulty, defined
in terms of the granularity of their explanations. We argue that these tasks will
suffer from substantially fewer shortcomings than the existing explainable NLI
tasks (or datasets).
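As a rough illustration of how a single formal proof could back structured explanations at more than one granularity, here is a minimal Python sketch. The ProofStep representation, the rule names, and the two granularity functions are hypothetical illustrations; the paper does not prescribe any particular data format or rule inventory.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical representation of one step of a formal NLI proof:
# a named inference rule applied to premises, yielding a conclusion.
@dataclass
class ProofStep:
    rule: str             # e.g. "hyponymy", "monotonicity" (illustrative names)
    premises: List[str]   # statements the rule consumes
    conclusion: str       # statement the rule derives

# A proof is an ordered sequence of steps ending in the NLI verdict.
Proof = List[ProofStep]

def coarse_explanation(proof: Proof) -> List[str]:
    """Coarse granularity: only the lexical-knowledge facts the proof relied on
    (here approximated as the premises of 'hyponymy' steps)."""
    facts = [p for step in proof if step.rule == "hyponymy" for p in step.premises]
    return sorted(set(facts))

def fine_explanation(proof: Proof) -> List[str]:
    """Fine granularity: every rule application, in proof order."""
    return [f"{s.rule}: {', '.join(s.premises)} => {s.conclusion}" for s in proof]

# Toy proof for "A man is sleeping" entailing "A person is sleeping".
toy_proof = [
    ProofStep("hyponymy", ["man is-a person"], "man entails person"),
    ProofStep("monotonicity",
              ["man entails person", "A man is sleeping"],
              "A person is sleeping"),
]

print(coarse_explanation(toy_proof))  # ['man is-a person']
print(fine_explanation(toy_proof))    # two human-readable proof steps
```

A task built on the coarse view would only ask for the knowledge facts used, while a task built on the fine view would ask for the full ordered derivation, which is one way the granularity-based ordering of tasks could play out in practice.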
Related papers
- Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving [13.485604499678262]
This paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs).
We present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences.
In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements.
arXiv Detail & Related papers (2024-05-02T15:20:01Z)
- An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models [99.31449616860291]
Modern language models (LMs) can learn to perform new tasks in different ways.
In instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly.
In instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description.
arXiv Detail & Related papers (2024-04-03T19:31:56Z)
- Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate [75.10515686215177]
Large Language Models (LLMs) excel in text generation, but their capability for producing faithful explanations in fact-checking remains underexamined.
We propose the Multi-Agent Debate Refinement (MADR) framework, leveraging multiple LLMs as agents with diverse roles.
MADR ensures that the final explanation undergoes rigorous validation, significantly reducing the likelihood of unfaithful elements and aligning closely with the provided evidence.
arXiv Detail & Related papers (2024-02-12T04:32:33Z)
- FaithLM: Towards Faithful Explanations for Large Language Models [67.29893340289779]
Large Language Models (LLMs) have become proficient in addressing complex tasks by leveraging their internal knowledge and reasoning capabilities.
The black-box nature of these models complicates the task of explaining their decision-making processes.
We introduce FaithLM to explain the decisions of LLMs with natural language (NL) explanations.
arXiv Detail & Related papers (2024-02-07T09:09:14Z)
- Logic-Scaffolding: Personalized Aspect-Instructed Recommendation Explanation Generation using LLMs [20.446594942586604]
We propose a framework called Logic-Scaffolding that combines the ideas of aspect-based explanation and chain-of-thought prompting to generate explanations through intermediate reasoning steps.
In this paper, we share our experience in building the framework and present an interactive demonstration for exploring our results.
arXiv Detail & Related papers (2023-12-22T00:30:10Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches have been adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, and the structured outputs can naturally be regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning, which yields higher reasoning accuracy but lacks interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
- Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure [66.33623392497599]
We show that a structure called template-content structure (T-C structure) can reduce the possible space from exponential to linear.
We further demonstrate that models can achieve task composition, reducing the space needed for learning from linear to logarithmic.
arXiv Detail & Related papers (2023-10-09T06:57:45Z)
- Trustworthy Formal Natural Language Specifications [3.8073142980733]
This paper shows it is possible to build support for specifications written in expressive subsets of natural language.
We implement a means to provide specifications in a modular, formal subset of English and have them automatically translated into formal claims.
We produce proof certificates explaining how each word was interpreted and how the sentence's structure was used to compute the meaning.
arXiv Detail & Related papers (2023-10-05T20:41:47Z)
- From Robustness to Explainability and Back Again [0.685316573653194]
The paper addresses the limited scalability of formal explainability and proposes novel algorithms for computing formal explanations.
The proposed algorithm instead computes explanations by answering a number of robustness queries, such that the number of queries is at most linear in the number of features (a minimal sketch of this query-per-feature idea is given after this list).
The experiments validate the practical efficiency of the proposed approach.
arXiv Detail & Related papers (2023-06-05T17:21:05Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- NILE: Natural Language Inference with Faithful Natural Language Explanations [10.074153632701952]
We propose Natural-language Inference over Label-specific Explanations (NILE).
NILE is a novel NLI method that utilizes auto-generated label-specific explanations to produce a label along with its faithful explanation.
We discuss the faithfulness of NILE's explanations in terms of the sensitivity of the decisions to the corresponding explanations.
arXiv Detail & Related papers (2020-05-25T13:56:03Z)
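To make the robustness-query idea from "From Robustness to Explainability and Back Again" (listed above) concrete, here is a minimal Python sketch. It assumes a black-box robustness oracle and uses a deletion-style loop, which is one standard way to obtain a linear number of queries; the oracle signature and helper names are illustrative, not the paper's API.

```python
from typing import Callable, List, Set

# Hypothetical robustness oracle: given the instance and the set of features
# that must stay fixed to their instance values, it returns True iff the
# model's prediction cannot change when the remaining features vary freely.
RobustnessOracle = Callable[[List[float], Set[int]], bool]

def formal_explanation(instance: List[float],
                       is_robust: RobustnessOracle) -> Set[int]:
    """Deletion-style computation of a formal (abductive) explanation.

    Starting from all features fixed, try to free each feature once and keep
    it free only if the prediction still cannot flip. This asks exactly one
    robustness query per feature, i.e. linearly many queries.
    """
    fixed = set(range(len(instance)))       # start: every feature fixed
    for i in range(len(instance)):
        candidate = fixed - {i}             # tentatively free feature i
        if is_robust(instance, candidate):  # prediction still guaranteed?
            fixed = candidate               # then feature i is not needed
    return fixed                            # subset sufficient for the prediction

# Toy usage with a fake oracle: the prediction depends only on feature 0.
def toy_oracle(instance: List[float], fixed: Set[int]) -> bool:
    return 0 in fixed                       # robust iff feature 0 stays fixed

print(formal_explanation([0.9, 0.1, 0.4], toy_oracle))  # {0}
```

The loop issues exactly one robustness query per feature, which matches the "at most linear in the number of features" bound mentioned in the summary above.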