Knowledge-based XAI through CBR: There is more to explanations than
models can tell
- URL: http://arxiv.org/abs/2108.10363v1
- Date: Mon, 23 Aug 2021 19:01:43 GMT
- Title: Knowledge-based XAI through CBR: There is more to explanations than
models can tell
- Authors: Rosina Weber, Manil Shrestha, Adam J Johs
- Abstract summary: We propose to use domain knowledge to complement the data used by data-centric artificial intelligence agents.
We formulate knowledge-based explainable artificial intelligence as a supervised data classification problem aligned with the CBR methodology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The underlying hypothesis of knowledge-based explainable artificial intelligence is that the data required by data-centric artificial intelligence agents (e.g., neural networks) are less diverse in content than the data required to explain the decisions of such agents to humans. The idea is that a classifier can attain high accuracy using data that express a phenomenon from a single perspective, whereas the audience for explanations can comprise multiple stakeholders and span diverse perspectives. We therefore propose to use domain knowledge to complement the data used by agents. We formulate knowledge-based explainable artificial intelligence as a supervised data classification problem aligned with the CBR methodology. In this formulation, the inputs are case problems, composed of both the inputs and outputs of the data-centric agent, and the outputs are case solutions: explanation categories obtained from domain knowledge and subject matter experts. This formulation does not typically lead to an accurate classification, preventing the selection of the correct explanation category. Knowledge-based explainable artificial intelligence extends the data in this formulation by adding features aligned with domain knowledge that can increase accuracy when selecting explanation categories.
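To make the formulation concrete, here is a minimal sketch (not the authors' implementation): case problems are built by concatenating the data-centric agent's inputs and outputs, case solutions are explanation-category labels, and a second run augments the cases with features aligned with domain knowledge to test whether classification accuracy improves. All data, the synthetic explanation categories, and the placeholder domain features are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data-centric agent: inputs X_agent and its predicted outputs y_agent.
n = 500
X_agent = rng.normal(size=(n, 8))
y_agent = (X_agent[:, 0] + X_agent[:, 1] > 0).astype(int)

# Case solutions (explanation categories). In practice these come from domain
# knowledge and subject matter experts; here they are synthetic and depend on an
# interaction that the raw agent data express only from one perspective.
interaction = X_agent[:, 0] * X_agent[:, 1]
explanation_category = (interaction > 0).astype(int)

# Case problems = the agent's inputs concatenated with its outputs.
case_problems = np.column_stack([X_agent, y_agent])

baseline = cross_val_score(LogisticRegression(max_iter=1000),
                           case_problems, explanation_category, cv=5).mean()

# Knowledge-based XAI step: extend each case with features aligned with domain
# knowledge (illustrative placeholders: an interaction effect and a threshold rule).
domain_features = np.column_stack([
    interaction,
    (X_agent[:, 2] > 1.0).astype(int),
])
extended_cases = np.column_stack([case_problems, domain_features])

extended = cross_val_score(LogisticRegression(max_iter=1000),
                           extended_cases, explanation_category, cv=5).mean()

print(f"accuracy without domain-knowledge features: {baseline:.3f}")
print(f"accuracy with domain-knowledge features:    {extended:.3f}")
```

If the added features genuinely encode the perspectives the explanation categories reflect, the second score should exceed the first; otherwise the formulation signals that further domain knowledge is needed.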
Related papers
- Understanding Generative AI Content with Embedding Models [4.662332573448995]
This work views the internal representations of modern deep neural networks (DNNs) as an automated form of traditional feature engineering.
We show that these embeddings can reveal interpretable, high-level concepts in unstructured sample data.
We find empirical evidence that there is inherent separability between real data and data generated by AI models.
arXiv Detail & Related papers (2024-08-19T22:07:05Z)
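As a rough illustration of the separability claim in the entry above (a sketch, not the paper's method): assume embeddings are available as fixed-length vectors — here random stand-ins rather than real DNN activations — and fit a linear probe; high cross-validated accuracy indicates the two populations separate in embedding space.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Stand-in "embeddings": in practice these would come from a DNN encoder.
real_embeddings = rng.normal(loc=0.0, scale=1.0, size=(300, 64))
generated_embeddings = rng.normal(loc=0.3, scale=1.0, size=(300, 64))

X = np.vstack([real_embeddings, generated_embeddings])
y = np.array([0] * 300 + [1] * 300)  # 0 = real, 1 = AI-generated

# A simple linear probe: high cross-validated accuracy suggests the two
# populations are (close to) linearly separable in embedding space.
probe = LogisticRegression(max_iter=1000)
score = cross_val_score(probe, X, y, cv=5).mean()
print(f"linear-probe accuracy (real vs. generated): {score:.3f}")
```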
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Attribution-Scores and Causal Counterfactuals as Explanations in Artificial Intelligence [0.0]
We highlight the relevance of explanations for artificial intelligence, in general, and for the newer developments in explainable AI.
We describe, in simple terms, explanations in data management and machine learning that are based on attribution scores, and counterfactuals as found in the area of causality.
arXiv Detail & Related papers (2023-03-06T01:46:51Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- On Explainability in AI-Solutions: A Cross-Domain Survey [4.394025678691688]
In automatically deriving a system model, AI algorithms learn relations in data that are not detectable by humans.
The more complex a model, the more difficult it is for a human to understand the reasoning behind its decisions.
This work provides an extensive survey of literature on this topic, which, to a large extent, consists of other surveys.
arXiv Detail & Related papers (2022-10-11T06:21:47Z)
- Do Deep Neural Networks Always Perform Better When Eating More Data? [82.6459747000664]
We design experiments under Independent and Identically Distributed (IID) and Out-of-Distribution (OOD) conditions.
Under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the difference between classes determine the amount of class information.
Under the OOD condition, the cross-domain degree of samples determines their contributions, and bias-fitting caused by irrelevant elements is a significant factor in cross-domain performance.
arXiv Detail & Related papers (2022-05-30T15:40:33Z)
- Enriching Artificial Intelligence Explanations with Knowledge Fragments [0.415623340386296]
This research builds explanations considering feature rankings for a particular forecast, enriching them with media news entries, datasets' metadata, and entries from the Google Knowledge Graph.
We compare two approaches (embeddings-based and semantic-based) on a real-world use case regarding demand forecasting.
arXiv Detail & Related papers (2022-04-12T07:19:30Z)
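A rough sketch of the enrichment idea in the entry above, under simplifying assumptions: feature importances for a single demand forecast are matched to the most similar knowledge fragment using TF-IDF cosine similarity, a stand-in for the embeddings-based and semantic-based matching the paper actually compares. All feature names, importances, and fragments are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical feature importances for one demand forecast (invented values).
feature_importance = {
    "promotion_active": 0.42,
    "holiday_week": 0.31,
    "temperature": 0.12,
}

# Hypothetical knowledge fragments (news snippets, dataset metadata, KG entries).
knowledge_fragments = [
    "An active promotion typically increases short-term demand for discounted items.",
    "Public holidays shift purchasing patterns toward the preceding week.",
    "The temperature column records daily averages in degrees Celsius.",
]

vectorizer = TfidfVectorizer()
fragment_matrix = vectorizer.fit_transform(knowledge_fragments)

# Enrich the explanation: attach the closest fragment to each top-ranked feature.
for feature, weight in sorted(feature_importance.items(), key=lambda kv: -kv[1]):
    query = vectorizer.transform([feature.replace("_", " ")])
    scores = cosine_similarity(query, fragment_matrix)[0]
    best = scores.argmax()
    print(f"{feature} (importance {weight:.2f}): {knowledge_fragments[best]}")
```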
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inference on this Knowledge Base with linear programming.
It identifies decisive features that are responsible for a classification as explanations, and produces results similar to those found by SHAP, a state-of-the-art Shapley Value based method.
arXiv Detail & Related papers (2020-05-05T11:39:23Z)
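The following is only a loose illustration of the "decisive features" idea in the entry above, not the paper's linear-programming-based probabilistic inference: a toy knowledge base of conditional frequencies stands in for the symbolic rules, and features are ranked by how strongly their observed value supports the predicted class. All data are invented.

```python
from collections import defaultdict

# Toy binary-feature dataset: (features, label). Purely illustrative.
train = [
    ({"fever": 1, "cough": 1, "rash": 0}, "flu"),
    ({"fever": 1, "cough": 0, "rash": 1}, "measles"),
    ({"fever": 0, "cough": 1, "rash": 0}, "cold"),
    ({"fever": 1, "cough": 1, "rash": 0}, "flu"),
    ({"fever": 0, "cough": 1, "rash": 0}, "cold"),
]

# Crude "knowledge base": counts underlying P(label | feature = value),
# a stand-in for the symbolic rules derived in the paper.
counts = defaultdict(lambda: defaultdict(int))
for feats, label in train:
    for f, v in feats.items():
        counts[(f, v)][label] += 1

def decisive_features(instance, predicted_label):
    """Rank features by how strongly their observed value supports the label."""
    scores = {}
    for f, v in instance.items():
        support = counts[(f, v)]
        total = sum(support.values())
        scores[f] = support[predicted_label] / total if total else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(decisive_features({"fever": 1, "cough": 1, "rash": 0}, "flu"))
```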
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)