Towards Analogy-Based Explanations in Machine Learning
- URL: http://arxiv.org/abs/2005.12800v1
- Date: Sat, 23 May 2020 06:41:35 GMT
- Title: Towards Analogy-Based Explanations in Machine Learning
- Authors: Eyke Hüllermeier
- Abstract summary: We argue that analogical reasoning is no less interesting from an interpretability and explainability point of view than it is for prediction.
An analogy-based approach is a viable alternative to existing approaches in the realm of explainable AI and interpretable machine learning.
- Score: 3.1410342959104725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Principles of analogical reasoning have recently been applied in the context
of machine learning, for example to develop new methods for classification and
preference learning. In this paper, we argue that, while analogical reasoning
is certainly useful for constructing new learning algorithms with high
predictive accuracy, it is arguably no less interesting from an
interpretability and explainability point of view. More specifically, we take
the view that an analogy-based approach is a viable alternative to existing
approaches in the realm of explainable AI and interpretable machine learning,
and that analogy-based explanations of the predictions produced by a machine
learning algorithm can complement similarity-based explanations in a meaningful
way. To corroborate these claims, we outline the basic idea of an analogy-based
explanation and illustrate its potential usefulness by means of some examples.
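To make the notion concrete, the usual formal device behind analogy-based learning is the analogical proportion "a is to b as c is to d". The following is a minimal sketch, not the paper's formal construction: it assumes a Boolean feature encoding and uses the standard componentwise definition of analogical proportions from the analogical-reasoning literature; the example vectors, labels, and explanation wording are hypothetical.

```python
# Minimal sketch of an analogy-based explanation over Boolean features.
# The componentwise test below is the standard Boolean analogical proportion;
# the example data, labels, and wording are hypothetical illustrations.

def proportion(a: int, b: int, c: int, d: int) -> bool:
    """a differs from b in the same way c differs from d (Boolean case)."""
    return (a - b) == (c - d)

def is_analogy(x_a, x_b, x_c, x_d) -> bool:
    """Check the analogical proportion x_a : x_b :: x_c : x_d componentwise."""
    return all(proportion(a, b, c, d) for a, b, c, d in zip(x_a, x_b, x_c, x_d))

# Hypothetical binary-encoded examples (e.g., feature vectors of applicants).
x_a, x_b = (1, 0, 1), (1, 1, 1)   # known cases with labels y_a = 0, y_b = 1
x_c, x_d = (0, 0, 1), (0, 1, 1)   # x_d is the query; x_c is known with y_c = 0

if is_analogy(x_a, x_b, x_c, x_d):
    # Analogy-based explanation: x_d relates to x_c exactly as x_b relates to
    # x_a; since that feature change flipped the label from 0 to 1, predicting
    # label 1 for x_d can be explained by pointing to the triple (x_a, x_b, x_c).
    print("x_a : x_b :: x_c : x_d holds -> explain the prediction for x_d")
```

The point of such an explanation is that it refers to a relational pattern between pairs of cases, rather than to the similarity of the query to a single neighbour.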
Related papers
- Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z) - Understanding Post-hoc Explainers: The Case of Anchors [6.681943980068051]
We present a theoretical analysis of a rule-based interpretability method that highlights a small set of words to explain a text classifier's decision.
After formalizing its algorithm and providing useful insights, we demonstrate mathematically that Anchors produces meaningful results.
arXiv Detail & Related papers (2023-03-15T17:56:34Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Analogies and Feature Attributions for Model Agnostic Explanation of
Similarity Learners [29.63747822793279]
We propose a method that provides feature attributions to explain the similarity between a pair of inputs as determined by a black box similarity learner.
Here the goal is to identify diverse analogous pairs of examples that share the same level of similarity as the input pair.
We prove that our analogy objective function is submodular, making the search for good-quality analogies efficient (see the greedy-selection sketch after this list).
arXiv Detail & Related papers (2022-02-02T17:28:56Z) - Probably Approximately Correct Explanations of Machine Learning Models
via Syntax-Guided Synthesis [6.624726878647541]
We propose a novel approach to understanding the decision-making of complex machine learning models (e.g., deep neural networks) using a combination of probably approximately correct learning (PAC) and a logic inference methodology called syntax-guided synthesis (SyGuS).
We prove that our framework produces explanations that with a high probability make only few errors and show empirically that it is effective in generating small, human-interpretable explanations.
arXiv Detail & Related papers (2020-09-18T12:10:49Z) - Learning explanations that are hard to vary [75.30552491694066]
We show that averaging across examples can favor memorization and 'patchwork' solutions that sew together different strategies.
We then propose and experimentally validate a simple alternative algorithm based on a logical AND.
arXiv Detail & Related papers (2020-09-01T10:17:48Z) - Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z) - Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inferences on such Knowledge Base with linear programming.
It identifies decisive features that are responsible for a classification as explanations and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method.
arXiv Detail & Related papers (2020-05-05T11:39:23Z) - Ontology-based Interpretable Machine Learning for Textual Data [35.01650633374998]
We introduce a novel interpreting framework that learns an interpretable model based on a sampling technique to explain prediction models.
To narrow down the search space for explanations, we design a learnable anchor algorithm.
A set of rules for combining the learned interpretable representations with anchors is further introduced to generate comprehensible explanations.
arXiv Detail & Related papers (2020-04-01T02:51:57Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
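On the submodularity claim in "Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners" above: for monotone submodular objectives, a simple greedy search under a size budget carries the classical (1 - 1/e) approximation guarantee, which is what makes such analogy selection computationally attractive. The sketch below is a generic greedy loop under that assumption; the objective, candidate pairs, and budget are placeholders, not the paper's actual analogy objective.

```python
# Hedged sketch: greedy maximization of an (assumed monotone submodular)
# objective for selecting analogous pairs. 'coverage_gain' and the candidate
# pairs are placeholders standing in for the paper's analogy-quality score.
from typing import Callable, List, Sequence, Set, Tuple

Pair = Tuple[int, int]  # indices of two examples forming a candidate pair

def greedy_select(candidates: Sequence[Pair],
                  objective: Callable[[Set[Pair]], float],
                  budget: int) -> List[Pair]:
    """Greedily pick up to `budget` pairs, each time adding the pair with the
    largest marginal gain; for monotone submodular objectives this achieves
    the standard (1 - 1/e) approximation of the best size-`budget` set."""
    selected: Set[Pair] = set()
    chosen: List[Pair] = []
    for _ in range(budget):
        best_pair, best_gain = None, 0.0
        base = objective(selected)
        for pair in candidates:
            if pair in selected:
                continue
            gain = objective(selected | {pair}) - base
            if gain > best_gain:
                best_pair, best_gain = pair, gain
        if best_pair is None:  # no remaining pair adds positive marginal value
            break
        selected.add(best_pair)
        chosen.append(best_pair)
    return chosen

# Toy placeholder objective: reward covering distinct examples (a coverage
# function, which is monotone submodular), standing in for analogy quality.
def coverage_gain(pairs: Set[Pair]) -> float:
    return float(len({i for p in pairs for i in p}))

print(greedy_select([(0, 1), (1, 2), (3, 4), (0, 2)], coverage_gain, budget=2))
```

Lazy-greedy variants exploit the same diminishing-returns property to skip most marginal-gain evaluations in practice, which is why submodularity translates into an efficient search.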
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides (including all information) and is not responsible for any consequences of its use.