SAT-Based Rigorous Explanations for Decision Lists
- URL: http://arxiv.org/abs/2105.06782v1
- Date: Fri, 14 May 2021 12:06:12 GMT
- Title: SAT-Based Rigorous Explanations for Decision Lists
- Authors: Alexey Ignatiev and Joao Marques-Silva
- Abstract summary: Decision lists (DLs) find a wide range of uses for classification problems in Machine Learning (ML).
We argue that interpretability is an elusive goal for some DLs.
This paper shows that computing explanations for DLs is computationally hard.
- Score: 17.054229845836332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision lists (DLs) find a wide range of uses for classification problems in
Machine Learning (ML), being implemented in a number of ML frameworks. DLs are
often perceived as interpretable. However, building on recent results for
decision trees (DTs), we argue that interpretability is an elusive goal for
some DLs. As a result, for some uses of DLs, it will be important to compute
(rigorous) explanations. Unfortunately, and in clear contrast with the case of
DTs, this paper shows that computing explanations for DLs is computationally
hard. Motivated by this result, the paper proposes propositional encodings for
computing abductive explanations (AXps) and contrastive explanations (CXps) of
DLs. Furthermore, the paper investigates the practical efficiency of a
MARCO-like approach for enumerating explanations. The experimental results
demonstrate that, for DLs used in practical settings, the use of SAT oracles
offers a very efficient solution, and that complete enumeration of explanations
is most often feasible.
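To make the notions of abductive (AXp) and contrastive (CXp) explanation concrete, the following is a minimal, self-contained Python sketch, not the paper's implementation: it hard-codes a toy decision list over three binary features, and it replaces the paper's SAT-oracle entailment check with a brute-force sweep of the (tiny) feature space. The rule format, feature names, and the helpers classify, entails, axp, and cxp are all illustrative assumptions.

```python
from itertools import product

# Toy decision list: ordered (condition, class) pairs; the first rule whose
# condition holds fires, and `None` marks the default rule.
RULES = [
    ({"x1": 1, "x2": 0}, "A"),   # if x1 = 1 and x2 = 0 then A
    ({"x3": 1},          "B"),   # else if x3 = 1 then B
    (None,               "A"),   # else A
]

DOMAINS = {"x1": (0, 1), "x2": (0, 1), "x3": (0, 1)}

def classify(point):
    """Return the class of the first rule whose condition `point` satisfies."""
    for cond, cls in RULES:
        if cond is None or all(point[f] == v for f, v in cond.items()):
            return cls
    raise ValueError("decision list lacks a default rule")

def entails(fixed, target):
    """Brute-force stand-in for the paper's SAT oracle: do ALL completions
    of the partial assignment `fixed` get classified as `target`?"""
    free = [f for f in DOMAINS if f not in fixed]
    return all(
        classify({**fixed, **dict(zip(free, vals))}) == target
        for vals in product(*(DOMAINS[f] for f in free))
    )

def axp(instance):
    """Deletion-based AXp: drop features one at a time, keeping a feature
    only if dropping it breaks entailment; the result is subset-minimal."""
    target = classify(instance)
    kept = dict(instance)
    for f in list(instance):
        trial = {g: v for g, v in kept.items() if g != f}
        if entails(trial, target):
            kept = trial              # f was unnecessary; discard it
    return kept, target

def cxp(instance):
    """Deletion-based CXp: a subset-minimal set of features whose values,
    if allowed to change, suffice to flip the prediction."""
    target = classify(instance)
    freed = set(instance)
    for f in sorted(freed):
        smaller = freed - {f}
        fixed = {g: v for g, v in instance.items() if g not in smaller}
        if not entails(fixed, target):
            freed = smaller           # freeing the smaller set still flips
    return freed, target

if __name__ == "__main__":
    point = {"x1": 1, "x2": 0, "x3": 1}
    print("AXp:", axp(point))   # -> ({'x1': 1, 'x2': 0}, 'A')
    print("CXp:", cxp(point))   # -> ({'x2'}, 'A')
```

In the paper itself, each entailment test is a single call to a SAT oracle over a propositional encoding of the DL, and a MARCO-like algorithm enumerates all AXps and CXps by exploiting their minimal-hitting-set duality; the brute-force sweep above is viable only because the toy domain has 2^3 points.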
Related papers
- FaithLM: Towards Faithful Explanations for Large Language Models [67.29893340289779]
Large Language Models (LLMs) have become proficient in addressing complex tasks by leveraging their internal knowledge and reasoning capabilities.
The black-box nature of these models complicates the task of explaining their decision-making processes.
We introduce FaithLM to explain the decisions of LLMs with natural language (NL) explanations.
arXiv Detail & Related papers (2024-02-07T09:09:14Z)
- Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
arXiv Detail & Related papers (2023-12-20T15:04:52Z)
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs).
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches are adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, and the structured outputs can be naturally regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, rendering higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
- Description Logics with Abstraction and Refinement [8.958066641323894]
We propose an extension of description logics (DLs) in which abstraction levels are first-class citizens.
We prove that reasoning in the resulting family of DLs is decidable while several seemingly harmless variations turn out to be undecidable.
arXiv Detail & Related papers (2023-06-06T14:27:03Z)
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca [62.65877150123775]
We use Boundless DAS to efficiently search for interpretable causal structure in large language models while they follow instructions.
Our findings mark a first step toward faithfully understanding the inner workings of our ever-growing and most widely deployed language models.
arXiv Detail & Related papers (2023-05-15T17:15:40Z)
- Logic of Differentiable Logics: Towards a Uniform Semantics of DL [1.1549572298362787]
Differentiable logics (DLs) have been proposed as a method of training neural networks to satisfy logical specifications.
This paper proposes a meta-language for defining DLs that we call the Logic of Differentiable Logics, or LDL.
We use LDL to establish several theoretical properties of existing DLs, and to conduct their empirical study in neural network verification.
arXiv Detail & Related papers (2023-03-19T13:03:51Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- On Tackling Explanation Redundancy in Decision Trees [19.833126971063724]
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
This paper offers both theoretical and experimental arguments demonstrating that, as long as the interpretability of decision trees is equated with the succinctness of explanations, decision trees ought not to be deemed interpretable.
arXiv Detail & Related papers (2022-05-20T05:33:38Z)
- Provably Precise, Succinct and Efficient Explanations for Decision Trees [32.062312674333775]
Decision trees (DTs) embody interpretable classifiers.
Recent work has demonstrated that predictions in DTs ought to be given rigorous explanations.
Delta-relevant sets denote explanations that are succinct and provably precise.
arXiv Detail & Related papers (2022-05-19T13:54:52Z)
- Defeasible reasoning in Description Logics: an overview on DL^N [10.151828072611426]
We provide an overview of DL^N, illustrating the underlying knowledge engineering requirements as well as the characteristic features that shield DL^N from some recurrent semantic and computational drawbacks.
We also compare DL^N with some alternative nonmonotonic semantics, highlighting the relationships between KLM-style approaches and DL^N.
arXiv Detail & Related papers (2020-09-10T16:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.