Explainable AI for Classification using Probabilistic Logic Inference
- URL: http://arxiv.org/abs/2005.02074v1
- Date: Tue, 5 May 2020 11:39:23 GMT
- Title: Explainable AI for Classification using Probabilistic Logic Inference
- Authors: Xiuyi Fan and Siyuan Liu and Thomas C. Henderson
- Abstract summary: We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inferences on such Knowledge Base with linear programming.
It identifies decisive features that are responsible for a classification as explanations and produces results similar to the ones found by SHAP, a state-of-the-art Shapley Value based method.
- Score: 9.656846523452502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The overarching goal of Explainable AI is to develop systems that not only
exhibit intelligent behaviours, but also are able to explain their rationale
and reveal insights. In explainable machine learning, methods that produce a
high level of prediction accuracy as well as transparent explanations are
valuable. In this work, we present an explainable classification method. Our
method works by first constructing a symbolic Knowledge Base from the training
data, and then performing probabilistic inferences on such Knowledge Base with
linear programming. Our approach achieves a level of learning performance
comparable to that of traditional classifiers such as random forests, support
vector machines and neural networks. It identifies decisive features that are
responsible for a classification as explanations and produces results similar
to the ones found by SHAP, a state of the art Shapley Value based method. Our
algorithms perform well on a range of synthetic and non-synthetic data sets.
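To make the described pipeline concrete, below is a minimal, illustrative sketch of probabilistic inference over a symbolic Knowledge Base via linear programming, using Nilsson-style possible-world semantics. The feature names, rule probabilities, inequality encoding of the rules, and the feature-removal heuristic for "decisive features" are all assumptions made for illustration; they are not the authors' actual Knowledge Base construction or explanation procedure.

```python
# Sketch: probabilistic logic inference via linear programming (illustrative
# assumptions only, not the paper's exact algorithm).
from itertools import product
from scipy.optimize import linprog

PROPS = ["f1", "f2", "c"]                 # two hypothetical features + class proposition
WORLDS = [dict(zip(PROPS, w)) for w in product([0, 1], repeat=len(PROPS))]

def row(formula):
    """0/1 row: does `formula` hold in each possible world?"""
    return [1.0 if formula(w) else 0.0 for w in WORLDS]

# Knowledge Base: rules with lower-bound probabilities (in the paper these
# would be mined from training data; here the values are made up).
kb = [
    (lambda w: (not w["f1"]) or w["c"], 0.9),        # P(f1 -> c)  >= 0.9
    (lambda w: (not w["f2"]) or (not w["c"]), 0.7),  # P(f2 -> ~c) >= 0.7
]

def class_bounds(evidence):
    """Tight [min, max] of P(c) over all distributions over possible worlds
    consistent with the KB rules and the observed feature values."""
    A_eq, b_eq = [[1.0] * len(WORLDS)], [1.0]        # total probability = 1
    for name, val in evidence.items():               # observed features held with prob. 1
        A_eq.append(row(lambda w, n=name, v=val: w[n] == v))
        b_eq.append(1.0)
    A_ub, b_ub = [], []
    for formula, p in kb:                            # encode P(rule) >= p as -P(rule) <= -p
        A_ub.append([-x for x in row(formula)])
        b_ub.append(-p)
    query = row(lambda w: w["c"])
    lo = linprog(query, A_ub, b_ub, A_eq, b_eq, bounds=(0, 1))
    hi = linprog([-x for x in query], A_ub, b_ub, A_eq, b_eq, bounds=(0, 1))
    return lo.fun, -hi.fun

evidence = {"f1": 1, "f2": 0}
print("P(c) bounds with full evidence:", class_bounds(evidence))

# Crude stand-in for "decisive features": drop one observation at a time and
# see how far the inferred bounds move (loosely analogous to the
# feature-removal intuition behind SHAP, not the paper's exact procedure).
for name in evidence:
    reduced = {k: v for k, v in evidence.items() if k != name}
    print("without", name, "->", class_bounds(reduced))
```

Each LP searches over the polytope of probability distributions on possible worlds that satisfy the rule constraints and the observed evidence, returning the tightest provable bounds on the class probability; features whose removal widens those bounds the most act as the explanation in this simplified stand-in.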
Related papers
- Simple and Interpretable Probabilistic Classifiers for Knowledge Graphs [0.0]
We describe an inductive approach based on learning simple belief networks.
We show how such models can be converted into (probabilistic) axioms (or rules).
arXiv Detail & Related papers (2024-07-09T17:05:52Z) - An AI Architecture with the Capability to Explain Recognition Results [0.0]
This research focuses on the importance of metrics to explainability and contributes two methods yielding performance gains.
The first method introduces a combination of explainable and unexplainable flows, proposing a metric to characterize the explainability of a decision.
The second method compares classic metrics for estimating the effectiveness of neural networks in the system, posing a new metric as the leading performer.
arXiv Detail & Related papers (2024-06-13T02:00:13Z) - XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z) - FIND: A Function Description Benchmark for Evaluating Interpretability Methods [86.80718559904854]
This paper introduces FIND (Function INterpretation and Description), a benchmark suite for evaluating automated interpretability methods.
FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate.
We evaluate methods that use pretrained language models to produce descriptions of function behavior in natural language and code.
arXiv Detail & Related papers (2023-09-07T17:47:26Z) - A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Multiclass classification utilising an estimated algorithmic probability prior [0.5156484100374058]
We study how algorithmic information theory, especially algorithmic probability, may aid in a machine learning task.
This work is one of the first to demonstrate how algorithmic probability can aid in a concrete, real-world, machine learning problem.
arXiv Detail & Related papers (2022-12-14T07:50:12Z) - Provable concept learning for interpretable predictions using variational inference [7.0349768355860895]
In safety critical applications, practitioners are reluctant to trust neural networks when no interpretable explanations are available.
We propose a probabilistic modeling framework to derive (C)oncept (L)earning and (P)rediction (CLAP).
We prove that our method is able to identify them while attaining optimal classification accuracy.
arXiv Detail & Related papers (2022-04-01T14:51:38Z) - Learning Gradual Argumentation Frameworks using Genetic Algorithms [5.953590600890214]
We propose a genetic algorithm to simultaneously learn the structure of argumentative classification models.
Our prototype learns argumentative classification models that are comparable to decision trees in terms of learning performance and interpretability.
arXiv Detail & Related papers (2021-06-25T12:33:31Z) - Predicting What You Already Know Helps: Provable Self-Supervised Learning [60.27658820909876]
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data.
We show a mechanism exploiting the statistical connections between certain reconstruction-based pretext tasks that guarantees learning a good representation.
We prove that the linear layer yields small approximation error even for complex ground-truth function classes.
arXiv Detail & Related papers (2020-08-03T17:56:13Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.