Model Agnostic Multilevel Explanations
- URL: http://arxiv.org/abs/2003.06005v1
- Date: Thu, 12 Mar 2020 20:18:00 GMT
- Title: Model Agnostic Multilevel Explanations
- Authors: Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang,
Amit Dhurandhar
- Abstract summary: We propose a meta-method that, given a typical local explainability method, can build a multilevel explanation tree.
The leaves of this tree correspond to the local explanations, the root corresponds to the global explanation, and intermediate levels correspond to explanations for groups of data points.
We argue that such a multilevel structure can also be an effective form of communication, where one could obtain a few explanations that characterize the entire dataset.
- Score: 31.831973884850147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, post-hoc local instance-level and global dataset-level
explainability of black-box models has received a lot of attention. Much less
attention has been given to obtaining insights at intermediate or group levels,
which is a need outlined in recent works that study the challenges in realizing
the guidelines in the General Data Protection Regulation (GDPR). In this paper,
we propose a meta-method that, given a typical local explainability method, can
build a multilevel explanation tree. The leaves of this tree correspond to the
local explanations, the root corresponds to the global explanation, and
intermediate levels correspond to explanations for groups of data points that
it automatically clusters. The method can also leverage side information, where
users can specify points for which they may want the explanations to be
similar. We argue that such a multilevel structure can also be an effective
form of communication, where one could obtain a few explanations that
characterize the entire dataset by considering an appropriate level in our
explanation tree. Explanations for novel test points can be cost-efficiently
obtained by associating them with the closest training points. When the local
explainability technique is generalized additive (viz. LIME, GAMs), we develop
a fast approximate algorithm for building the multilevel tree and study its
convergence behavior. We validate the effectiveness of the proposed technique
based on two human studies -- one with experts and the other with non-expert
users -- on real world datasets, and show that we produce high fidelity sparse
explanations on several other public datasets.
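As a rough illustration of the idea (not the authors' actual algorithm), the sketch below builds a multilevel explanation tree by fitting LIME-style local surrogates, hierarchically clustering their coefficient vectors, and averaging coefficients within each cluster to serve as group-level and dataset-level explanations; a novel test point simply reuses the explanation of its nearest training point. The function names (`local_explanations`, `explanation_tree`, `explain_test_point`), the Ridge surrogates, the Ward linkage, and the single intermediate level are all illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): build a multilevel explanation
# tree by hierarchically clustering per-instance LIME-style coefficient vectors.
# Assumes generalized-additive local explanations, so averaging coefficients
# within a cluster yields a meaningful group-level explanation.
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree
from sklearn.linear_model import Ridge

def local_explanations(model, X, n_samples=200, sigma=1.0):
    """Fit one linear surrogate per instance (leaf-level explanations)."""
    coefs = []
    for x in X:
        # Perturb around x and fit a locally weighted linear surrogate to the black box.
        Z = x + sigma * np.random.randn(n_samples, X.shape[1])
        w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * sigma ** 2))
        surrogate = Ridge(alpha=1.0).fit(Z, model(Z), sample_weight=w)
        coefs.append(surrogate.coef_)
    return np.vstack(coefs)

def explanation_tree(coefs, n_groups=4):
    """Cluster local explanations; return group-level and global explanations."""
    Z = linkage(coefs, method="ward")                   # agglomerative tree over leaves
    groups = cut_tree(Z, n_clusters=n_groups).ravel()   # one intermediate level
    group_expl = {g: coefs[groups == g].mean(axis=0) for g in np.unique(groups)}
    global_expl = coefs.mean(axis=0)                    # root of the tree
    return Z, groups, group_expl, global_expl

def explain_test_point(x_test, X_train, coefs):
    """Cheap explanation for a novel point: reuse its nearest training point's leaf."""
    nearest = np.argmin(np.linalg.norm(X_train - x_test, axis=1))
    return coefs[nearest]
```

Here `model` is any callable black-box predictor returning scalar outputs. The actual method additionally regularizes explanations across tree levels, can incorporate user-specified similarity constraints (side information), and has a fast approximate variant for generalized additive explainers, none of which this sketch captures.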
Related papers
- Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals [4.384272169863716]
Interpretability is crucial for machine learning algorithms in high-stakes medical applications.
Attri-Net is an inherently interpretable model for multi-label classification that provides local and global explanations.
arXiv Detail & Related papers (2024-06-08T13:52:02Z)
- GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations [10.276136171459731]
Global & Efficient Counterfactual Explanations (GLOBE-CE) is a flexible framework that tackles the reliability and scalability issues associated with the current state of the art.
We provide a unique mathematical analysis of categorical feature translations, utilising it in our method.
Experimental evaluation with publicly available datasets and user studies demonstrates that GLOBE-CE performs significantly better than the current state of the art.
arXiv Detail & Related papers (2023-05-26T15:26:59Z)
- Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well-established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance (a minimal sketch of this merge rule appears after this list).
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
arXiv Detail & Related papers (2023-05-24T11:05:12Z)
- A Unified Understanding of Deep NLP Models for Text Classification [88.35418976241057]
We have developed a visual analysis tool, DeepNLPVis, to enable a unified understanding of NLP models for text classification.
The key idea is a mutual information-based measure, which provides quantitative explanations of how each layer of a model maintains the information of input words in a sample.
A multi-level visualization, which consists of a corpus-level, a sample-level, and a word-level visualization, supports the analysis from the overall training set to individual samples.
arXiv Detail & Related papers (2022-06-19T08:55:07Z)
- Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner [56.08919422452905]
We propose an architecture called the Iterative Retrieval-Generation Reasoner (IRGR).
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
We outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
arXiv Detail & Related papers (2022-05-18T21:52:11Z)
- Hierarchical clustering by aggregating representatives in sub-minimum-spanning-trees [5.877624540482919]
We propose a novel hierarchical clustering algorithm, in which, while building the clustering dendrogram, we can effectively detect the representative point.
Under our analysis, the proposed algorithm has O(n log n) time complexity and O(log n) space complexity, indicating its scalability to massive data.
arXiv Detail & Related papers (2021-11-11T07:36:55Z)
- Best of both worlds: local and global explanations with human-understandable concepts [10.155485106226754]
Interpretability techniques aim to provide the rationale behind a model's decision, typically by explaining either an individual prediction or a class of predictions.
We show that our method improves global explanations over TCAV when compared to ground truth, and provides useful insights.
arXiv Detail & Related papers (2021-06-16T09:05:25Z)
- Self-supervised Graph-level Representation Learning with Local and Global Structure [71.45196938842608]
We propose a unified framework called Local-instance and Global-semantic Learning (GraphLoG) for self-supervised whole-graph representation learning.
Besides preserving the local similarities, GraphLoG introduces the hierarchical prototypes to capture the global semantic clusters.
An efficient online expectation-maximization (EM) algorithm is further developed for learning the model.
arXiv Detail & Related papers (2021-06-08T05:25:38Z)
- Deep Descriptive Clustering [24.237000220172906]
This paper explores a novel setting for performing clustering on complex data while simultaneously generating explanations using interpretable tags.
We form good clusters by maximizing, as the clustering objective, the mutual information between the empirical distribution of the inputs and the induced cluster labels.
Experimental results on public data demonstrate that our model outperforms competitive baselines in clustering performance.
arXiv Detail & Related papers (2021-05-24T21:40:16Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
- Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)
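The merge rule recommended in the "Hierarchical clustering with dot products recovers hidden tree structure" entry above can be illustrated with a minimal, deliberately naive sketch. Interpreting "maximum average dot product" as the mean dot product over all cross-cluster pairs is our assumption for illustration, not a statement of that paper's exact implementation; the function name `dot_product_agglomerative` is likewise hypothetical.

```python
# Sketch of agglomerative clustering that, at each step, merges the pair of
# clusters whose members have the highest average cross-pair dot product
# (instead of minimum distance or within-cluster variance).
import numpy as np

def dot_product_agglomerative(X):
    """X: (n, d) array of data points. Returns the sequence of merges."""
    clusters = [[i] for i in range(len(X))]
    merges = []
    while len(clusters) > 1:
        best, best_score = None, -np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average dot product over all cross-cluster pairs.
                score = np.mean(X[clusters[a]] @ X[clusters[b]].T)
                if score > best_score:
                    best, best_score = (a, b), score
        a, b = best
        merges.append((clusters[a], clusters[b], best_score))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges  # the sequence of merges defines the estimated tree
```

Because the average cross-cluster dot product equals the dot product of the two cluster sum vectors divided by the product of the cluster sizes, a faster version could cache per-cluster sums and update them in O(d) per merge rather than rescanning all pairs.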