Towards Human-Compatible XAI: Explaining Data Differentials with Concept
Induction over Background Knowledge
- URL: http://arxiv.org/abs/2209.13710v1
- Date: Tue, 27 Sep 2022 21:51:27 GMT
- Title: Towards Human-Compatible XAI: Explaining Data Differentials with Concept
Induction over Background Knowledge
- Authors: Cara Widmer, Md Kamruzzaman Sarker, Srikanth Nadella, Joshua Fiechter,
Ion Juvina, Brandon Minnery, Pascal Hitzler, Joshua Schwartz, Michael Raymer
- Abstract summary: We show that concept induction can be used to explain data differentials in the context of Explainable AI (XAI).
Our approach utilizes a large class hierarchy, curated from the Wikipedia category hierarchy, as background knowledge.
- Score: 2.803567242358594
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Concept induction, which is based on formal logical reasoning over
description logics, has been used in ontology engineering in order to create
ontology (TBox) axioms from the base data (ABox) graph. In this paper, we show
that it can also be used to explain data differentials, for example in the
context of Explainable AI (XAI), and we show that it can in fact be done in a
way that is meaningful to a human observer. Our approach utilizes a large class
hierarchy, curated from the Wikipedia category hierarchy, as background
knowledge.
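
To make the abstract's idea concrete, here is a minimal sketch of concept induction over a toy class hierarchy: given positive and negative example individuals (the data differential), it returns the background-knowledge classes that cover every positive and no negative. The `PARENTS`/`TYPES` structures and function names are illustrative stand-ins, not the paper's Wikipedia-derived hierarchy or its description-logic algorithm.

```python
# Toy concept induction over a class hierarchy: find the classes
# that cover all positive individuals and none of the negatives.

# Hypothetical miniature hierarchy (child -> parent classes),
# standing in for the Wikipedia-derived hierarchy in the paper.
PARENTS = {
    "Dog": ["Mammal"],
    "Cat": ["Mammal"],
    "Eagle": ["Bird"],
    "Mammal": ["Animal"],
    "Bird": ["Animal"],
    "Animal": [],
}

# ABox-style assertions: the class of each individual.
TYPES = {"rex": "Dog", "tom": "Cat", "sam": "Eagle"}

def ancestors(cls):
    """All classes subsuming `cls`, including `cls` itself."""
    seen, stack = set(), [cls]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(PARENTS[c])
    return seen

def explain_differential(positives, negatives):
    """Classes covering every positive and no negative individual."""
    pos = set.intersection(*(ancestors(TYPES[i]) for i in positives))
    neg = set.union(*(ancestors(TYPES[i]) for i in negatives))
    return pos - neg

# "What separates {rex, tom} from {sam}?"
print(explain_differential({"rex", "tom"}, {"sam"}))  # {'Mammal'}
```

Real concept induction systems generalize this simple set-membership check to full description-logic class expressions derived from the TBox and ABox.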
Related papers
- On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis [1.55858752644861]
The state of the art indicates that hidden node activations can, in some cases, be interpreted in a way that makes sense to humans.
We introduce a novel model-agnostic post-hoc Explainable AI method and demonstrate that it provides meaningful interpretations.
arXiv Detail & Related papers (2024-04-21T07:57:45Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation, aimed at better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is the evaluation of user knowledge and feedback.
We propose a framework for generating and evaluating explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Concept-Based Explanations for Tabular Data [0.0]
We propose a concept-based explainability method for Deep Neural Networks (DNNs).
We show the validity of our method in generating interpretability results that match human-level intuitions.
We also propose a notion of fairness based on TCAV that quantifies which layer of a DNN has learned representations that lead to biased predictions.
arXiv Detail & Related papers (2022-09-13T02:19:29Z)
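
The entry above builds on TCAV. As a reference point, here is a minimal numpy sketch of the core TCAV score (not the paper's fairness extension): fit a direction separating concept activations from random ones, then measure how often prediction gradients align with it. All activations, gradients, and dimensions below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for layer activations of a trained network:
# examples of a concept vs. random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

# Concept Activation Vector (CAV): a direction separating concept
# activations from random ones. TCAV fits a linear classifier;
# a difference-of-means direction is the simplest stand-in.
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Per-example gradients of the class logit w.r.t. the layer
# activations (synthetic here; a real run backpropagates them).
grads = rng.normal(size=(100, 8)) + 0.3 * cav

# TCAV score: fraction of inputs whose class score increases when
# activations move in the concept direction.
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```
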
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Unsupervised Causal Binary Concepts Discovery with VAE for Black-box Model Explanation [28.990604269473657]
We aim to explain a black-box classifier with explanations of the form: data X is classified as class Y because X has A and B, and does not have C.
arXiv Detail & Related papers (2021-09-09T19:06:53Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to current XAI methods that generate explanations as a single-shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inference over that Knowledge Base with linear programming.
It identifies the decisive features responsible for a classification as explanations, and produces results similar to those found by SHAP, a state-of-the-art Shapley-Value-based method.
arXiv Detail & Related papers (2020-05-05T11:39:23Z)
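
The entry above compares its results to SHAP, which is grounded in Shapley values. For orientation, here is a minimal exact Shapley-value computation over a toy coalition value function; the feature names and payoffs are hypothetical, and real SHAP approximates this sum for model predictions.

```python
from itertools import combinations
from math import factorial

# Hypothetical features and a toy coalition value function, with an
# interaction bonus when age and income appear together.
FEATURES = ["age", "income", "zip"]

def value(coalition):
    scores = {"age": 0.2, "income": 0.5, "zip": 0.1}
    bonus = 0.15 if {"age", "income"} <= set(coalition) else 0.0
    return sum(scores[f] for f in coalition) + bonus

def shapley(feature):
    """Exact Shapley value: weighted marginal contribution of
    `feature` over all coalitions of the other features."""
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    total = 0.0
    for k in range(n):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(coalition + (feature,)) - value(coalition))
    return total

for f in FEATURES:
    print(f"{f}: {shapley(f):.3f}")  # the three values sum to value(all)
```
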
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.