Logic for Explainable AI
- URL: http://arxiv.org/abs/2305.05172v1
- Date: Tue, 9 May 2023 04:53:57 GMT
- Title: Logic for Explainable AI
- Authors: Adnan Darwiche
- Abstract summary: A central quest in explainable AI relates to understanding the decisions made by (learned) classifiers.
We discuss in this tutorial a comprehensive, semantical and computational theory of explainability along these dimensions.
- Score: 11.358487655918676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central quest in explainable AI relates to understanding the decisions made
by (learned) classifiers. There are three dimensions of this understanding that
have been receiving significant attention in recent years. The first dimension
relates to characterizing conditions on instances that are necessary and
sufficient for decisions, therefore providing abstractions of instances that
can be viewed as the "reasons behind decisions." The next dimension relates to
characterizing minimal conditions that are sufficient for a decision, therefore
identifying maximal aspects of the instance that are irrelevant to the
decision. The last dimension relates to characterizing minimal conditions that
are necessary for a decision, therefore identifying minimal perturbations to
the instance that yield alternate decisions. We discuss in this tutorial a
comprehensive, semantical and computational theory of explainability along
these dimensions which is based on some recent developments in symbolic logic.
The tutorial will also discuss how this theory is particularly applicable to
non-symbolic classifiers such as those based on Bayesian networks, decision
trees, random forests and some types of neural networks.
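To make the three dimensions concrete, the following is a minimal brute-force sketch on a toy Boolean classifier: it enumerates the subset-minimal sets of instance characteristics that are sufficient to fix the decision, and the subset-minimal sets of features whose change flips it. The classifier, feature names, and exhaustive enumeration are illustrative assumptions; the tutorial's theory operates on compiled logical representations and is far more efficient than this search.

```python
from itertools import combinations, product

FEATURES = ["f1", "f2", "f3"]

def classifier(x):
    # Toy decision function (an assumption for illustration): positive iff f1 and (f2 or f3).
    return x["f1"] and (x["f2"] or x["f3"])

def fixes_decision(partial, decision):
    """True if every completion of the partial instance yields the same decision."""
    free = [f for f in FEATURES if f not in partial]
    return all(
        classifier(dict(partial, **dict(zip(free, values)))) == decision
        for values in product([False, True], repeat=len(free))
    )

def sufficient_conditions(instance):
    """Subset-minimal sets of instance characteristics that are sufficient for the decision."""
    decision = classifier(instance)
    minimal = []
    for size in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            if any(set(m) <= set(subset) for m in minimal):
                continue  # a smaller sufficient condition already covers this subset
            if fixes_decision({f: instance[f] for f in subset}, decision):
                minimal.append(subset)
    return minimal

def minimal_flip_sets(instance):
    """Subset-minimal sets of features whose change yields a different decision."""
    decision = classifier(instance)
    minimal = []
    for size in range(1, len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            if any(set(m) <= set(subset) for m in minimal):
                continue
            flipped = dict(instance, **{f: not instance[f] for f in subset})
            if classifier(flipped) != decision:
                minimal.append(subset)
    return minimal

instance = {"f1": True, "f2": True, "f3": False}
print(sufficient_conditions(instance))  # minimal conditions sufficient for the decision
print(minimal_flip_sets(instance))      # minimal perturbations that yield an alternate decision
```

On this toy instance the only minimal sufficient condition is {f1, f2}, while flipping either f1 or f2 alone already changes the decision.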
Related papers
- On rough mereology and VC-dimension in treatment of decision prediction for open world decision systems [0.0]
Decision prediction is crucial for online learning, where each new object must be assigned a predicted decision value.
The approach we propose is founded in the theory of rough mereology and requires a theory of sets/concepts.
We apply this notion in a procedure to select a decision for new yet unseen objects.
arXiv Detail & Related papers (2024-06-19T08:22:51Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Explaining Random Forests using Bipolar Argumentation and Markov Networks (Technical Report) [17.9926469947157]
Random forests are decision tree ensembles that can be used to solve a variety of machine learning problems.
In order to reason about the decision process, we propose representing it as an argumentation problem.
We generalize sufficient and necessary argumentative explanations using a Markov network encoding, discuss the relevance of these explanations and establish relationships to families of abductive explanations from the literature.
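As a point of reference for what sufficient explanations of a forest mean operationally, here is a naive brute-force sketch over a small scikit-learn forest (an assumed dependency, with made-up data); it is not the paper's bipolar-argumentation or Markov-network encoding and scales exponentially in the number of features.

```python
from itertools import combinations, product
from sklearn.ensemble import RandomForestClassifier

# Tiny illustrative dataset with three binary features (invented for the example).
X = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1], [0, 0, 1]]
y = [0, 0, 1, 1, 1, 0]
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

def keeps_label(instance, fixed, label):
    """Do all completions of the fixed feature values keep the forest's label?"""
    free = [i for i in range(len(instance)) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        candidate = list(instance)
        for i, v in zip(free, values):
            candidate[i] = v
        if forest.predict([candidate])[0] != label:
            return False
    return True

def sufficient_explanations(instance):
    """Subset-minimal feature sets that, fixed to their current values, force the label."""
    label = forest.predict([instance])[0]
    found = []
    for size in range(len(instance) + 1):
        for subset in combinations(range(len(instance)), size):
            if any(set(f) <= set(subset) for f in found):
                continue  # a smaller explanation already covers this subset
            if keeps_label(instance, subset, label):
                found.append(subset)
    return label, found

print(sufficient_explanations([1, 1, 1]))
```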
arXiv Detail & Related papers (2022-11-21T18:20:50Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Attribute Selection using Contranominal Scales [0.09668407688201358]
Formal Concept Analysis (FCA) makes it possible to analyze binary data by deriving concepts and ordering them in lattices.
The size of such a lattice depends on the number of subcontexts of the corresponding formal context that are isomorphic to a contranominal scale.
We propose the algorithm ContraFinder that enables the computation of all contranominal scales of a given formal context.
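For intuition about the object being computed, the sketch below brute-forces all subcontexts of a tiny, made-up formal context that are isomorphic to a contranominal scale; it is not the ContraFinder algorithm and would not scale beyond toy inputs.

```python
from itertools import combinations, permutations

# Toy formal context: True means the object has the attribute (invented example).
OBJECTS = ["o1", "o2", "o3", "o4"]
ATTRIBUTES = ["a", "b", "c"]
INCIDENCE = {
    ("o1", "a"): False, ("o1", "b"): True,  ("o1", "c"): True,
    ("o2", "a"): True,  ("o2", "b"): False, ("o2", "c"): True,
    ("o3", "a"): True,  ("o3", "b"): True,  ("o3", "c"): False,
    ("o4", "a"): True,  ("o4", "b"): True,  ("o4", "c"): True,
}

def contranominal_scales(k):
    """All k x k subcontexts isomorphic to a contranominal scale (brute force)."""
    scales = []
    for objs in combinations(OBJECTS, k):
        for attrs in combinations(ATTRIBUTES, k):
            for paired in permutations(attrs):
                # Object objs[i] must lack paired[i] and have every other chosen attribute.
                if all(INCIDENCE[(g, m)] == (m != paired[i])
                       for i, g in enumerate(objs) for m in attrs):
                    scales.append((objs, attrs))
                    break
    return scales

print(contranominal_scales(3))  # -> [(('o1', 'o2', 'o3'), ('a', 'b', 'c'))] for this toy context
```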
arXiv Detail & Related papers (2021-06-21T10:53:50Z)
- Entropy-based Logic Explanations of Neural Networks [24.43410365335306]
We propose an end-to-end differentiable approach for extracting logic explanations from neural networks.
The method relies on an entropy-based criterion which automatically identifies the most relevant concepts.
We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy.
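The snippet below is only loosely inspired by that idea: it scores hypothetical concepts with a softmax relevance distribution, reports its entropy (the quantity a trainable criterion would push down), and keeps the concepts above the uniform baseline. The concept names, scores, and selection rule are assumptions, not the paper's entropy-based layer.

```python
import math

def relevance(scores, temperature=1.0):
    """Softmax relevance over concept scores (assumed to come from learned weights)."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical concepts and scores; higher score means more relevant.
concepts = ["fever", "cough", "age>65", "smoker"]
scores = [2.3, 0.1, 1.9, -0.5]

alpha = relevance(scores)
print("entropy (to be minimised during training):", round(entropy(alpha), 3))
# Keep concepts whose relevance exceeds the uniform baseline 1/n and hand only
# those to the logic-explanation extraction step.
kept = [c for c, a in zip(concepts, alpha) if a > 1 / len(concepts)]
print("concepts kept for the explanation:", kept)
```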
arXiv Detail & Related papers (2021-06-12T15:50:47Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
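As a rough illustration of neurons as weighted logic, here is a sketch of a Łukasiewicz-style weighted conjunction and its dual disjunction on real-valued truth values; the framework's actual activation functions, parameter constraints, and learning procedure are richer than this.

```python
def clamp(v, lo=0.0, hi=1.0):
    return max(lo, min(hi, v))

def weighted_and(truths, weights, beta=1.0):
    """Weighted Lukasiewicz-style conjunction on truth values in [0, 1]."""
    return clamp(beta - sum(w * (1.0 - t) for t, w in zip(truths, weights)))

def weighted_or(truths, weights, beta=1.0):
    """Dual disjunction, obtained from the conjunction via negation."""
    return 1.0 - weighted_and([1.0 - t for t in truths], weights, beta)

# A tiny "neuron as a formula component" example: (smoker AND age_risk) OR family_history.
smoker, age_risk, family_history = 0.9, 0.7, 0.2
inner = weighted_and([smoker, age_risk], weights=[1.0, 1.0])
risk = weighted_or([inner, family_history], weights=[1.0, 0.5])
print(round(inner, 3), round(risk, 3))  # 0.6 0.7
```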
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than 8000 variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
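The sketch below shows the computation-as-optimization pattern on a toy problem, not the paper's actual-causality encoding: a MaxSAT solver (the python-sat package is an assumed dependency) finds the cheapest input flips that force an AND gate's output to change, with soft clauses preferring the observed values.

```python
# pip install python-sat   (assumed dependency; a toy encoding, not the paper's)
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Variables: 1 = A, 2 = B, 3 = OUT, with OUT <-> (A AND B).
A, B, OUT = 1, 2, 3
wcnf = WCNF()
wcnf.append([-OUT, A])       # OUT implies A
wcnf.append([-OUT, B])       # OUT implies B
wcnf.append([-A, -B, OUT])   # A and B imply OUT
wcnf.append([-OUT])          # hard goal: force the output to become false

# Soft clauses: prefer keeping the observed input values A = 1 and B = 1.
wcnf.append([A], weight=1)
wcnf.append([B], weight=1)

with RC2(wcnf) as solver:
    model = solver.compute()  # maximises the weight of observations kept
    flips = [v for v in (A, B) if -v in model]
    print("minimal flip:", flips, "cost:", solver.cost)
```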
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
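A very rough sketch of that last step, under assumptions: a naive greedy routine substitutes reference values feature by feature until a user-supplied predictor reaches the target class. It is not the paper's robustness-analysis procedure and does not guarantee a minimal set.

```python
def targeted_feature_set(predict, instance, reference, target):
    """Naive greedy search for a feature set whose substitution reaches `target`.

    `predict` maps a feature dict to a label; `reference` provides replacement
    values (e.g. a prototype of the target class). Purely illustrative.
    """
    current = dict(instance)
    chosen = []
    for feature in instance:
        if predict(current) == target:
            break
        if instance[feature] != reference[feature]:
            current[feature] = reference[feature]
            chosen.append(feature)
    return chosen if predict(current) == target else None

# Toy usage with a hand-written predictor (an assumption for illustration).
predict = lambda x: int(x["f1"] and (x["f2"] or x["f3"]))
print(targeted_feature_set(predict, {"f1": 1, "f2": 1, "f3": 0},
                           {"f1": 0, "f2": 0, "f3": 0}, target=0))  # -> ['f1']
```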
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
- On The Reasons Behind Decisions [11.358487655918676]
We define notions such as sufficient, necessary and complete reasons behind decisions.
We show how these notions can be used to evaluate counterfactual statements.
We present efficient algorithms for computing these notions.
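As a semantic baseline only: the paper answers such counterfactual queries from the complete reason behind the decision, whereas the toy sketch below simply re-evaluates an assumed classifier under the stated changes.

```python
def classifier(x):
    # Toy loan decision (an assumption): approve iff good credit and (employed or guarantor).
    return x["good_credit"] and (x["employed"] or x["guarantor"])

def counterfactual_holds(instance, changed, expected_decision):
    """Had the features in `changed` taken these alternate values (all else fixed),
    would the decision have been `expected_decision`?"""
    return classifier(dict(instance, **changed)) == expected_decision

applicant = {"good_credit": True, "employed": False, "guarantor": False}
print(classifier(applicant))                                        # False: rejected
print(counterfactual_holds(applicant, {"employed": True}, True))    # True: employment flips it
print(counterfactual_holds(applicant, {"guarantor": True}, True))   # True as well
```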
arXiv Detail & Related papers (2020-02-21T13:37:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.