Behavior of k-NN as an Instance-Based Explanation Method
- URL: http://arxiv.org/abs/2109.06999v1
- Date: Tue, 14 Sep 2021 22:32:19 GMT
- Title: Behavior of k-NN as an Instance-Based Explanation Method
- Authors: Chhavi Yadav and Kamalika Chaudhuri
- Abstract summary: Instance-based explanation methods are a popular type that returns selected instances from the training set to explain the predictions for a test sample.
Our paper answers this question for k-NNs which are natural contenders for an instance-based explanation method.
- Score: 26.27046865670577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adoption of DL models in critical areas has led to an escalating demand for sound explanation methods. Instance-based explanation methods are a popular type that returns selected instances from the training set to explain the predictions for a test sample. One way to connect these explanations with the prediction is to ask the following counterfactual question: how do the loss and prediction for a test sample change when its explanations are removed from the training set? Our paper answers this question for k-NNs, which are natural contenders for an instance-based explanation method. We first demonstrate empirically that the representation space induced by the last layer of a neural network is the best space in which to perform k-NN. Using this layer, we conduct our experiments and compare them to influence functions (IFs) (Koh & Liang, 2017), which try to answer a similar question. Our evaluations do indicate a change in loss and predictions when explanations are removed, but we do not find a trend between $k$ and the size of that change. We find significantly more stability in the predictions and loss for MNIST than for CIFAR-10. Surprisingly, we do not observe much difference in the behavior of k-NNs vs. IFs on this question, which we attribute to training-set subsampling for IFs.
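As a rough illustration of the protocol the abstract describes, the sketch below (not the authors' code; the network, data, and hyperparameters are placeholder choices) embeds training and test points with the last hidden layer of a small classifier, returns the k nearest training instances as the explanation, and then retrains without them to measure the change in the test sample's loss and prediction.

```python
# Minimal sketch of k-NN explanations in last-layer representation space,
# plus the counterfactual removal experiment. Illustrative only.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

class SmallNet(nn.Module):
    def __init__(self, d_in=20, d_hidden=64, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)

    def embed(self, x):           # representation induced by the last hidden layer
        return self.body(x)

    def forward(self, x):
        return self.head(self.body(x))

def train(model, X, y, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model

def knn_explanations(model, X_train, x_test, k=5):
    """Indices of the k training points nearest to x_test in embedding space."""
    with torch.no_grad():
        E_train = model.embed(X_train).numpy()
        e_test = model.embed(x_test.unsqueeze(0)).numpy()
    index = NearestNeighbors(n_neighbors=k).fit(E_train)
    return index.kneighbors(e_test, return_distance=False)[0]

def counterfactual_change(X_train, y_train, x_test, y_test, k=5, seed=0):
    """Loss/prediction change for x_test after removing its k-NN explanations."""
    loss_fn = nn.CrossEntropyLoss()
    torch.manual_seed(seed)
    model = train(SmallNet(X_train.shape[1]), X_train, y_train)
    expl = knn_explanations(model, X_train, x_test, k)
    with torch.no_grad():
        logits = model(x_test.unsqueeze(0))
        loss_before = loss_fn(logits, y_test.unsqueeze(0)).item()
        pred_before = logits.argmax(1).item()
    keep = torch.as_tensor(np.setdiff1d(np.arange(len(X_train)), expl))
    torch.manual_seed(seed)  # retrain from scratch without the explanations
    model2 = train(SmallNet(X_train.shape[1]), X_train[keep], y_train[keep])
    with torch.no_grad():
        logits2 = model2(x_test.unsqueeze(0))
        loss_after = loss_fn(logits2, y_test.unsqueeze(0)).item()
        pred_after = logits2.argmax(1).item()
    return loss_after - loss_before, pred_before != pred_after

# Toy usage on synthetic data
X = torch.randn(500, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()
d_loss, flipped = counterfactual_change(X[:400], y[:400], X[400], y[400], k=5)
print(f"loss change: {d_loss:+.4f}, prediction flipped: {flipped}")
```

Influence functions answer the same counterfactual question by approximating this leave-out effect with gradients and Hessian-vector products rather than retraining, which is why the abstract compares the two approaches directly.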
Related papers
- Deep Limit Model-free Prediction in Regression [0.0]
We provide a model-free approach based on deep neural networks (DNNs) to produce point predictions and prediction intervals under a general regression setting.
Our method is more stable and accurate than other DNN-based counterparts, especially for optimal point predictions.
arXiv Detail & Related papers (2024-08-18T16:37:53Z) - Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z) - Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity [7.094238868711952]
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
arXiv Detail & Related papers (2023-06-07T23:40:18Z) - Boosted Dynamic Neural Networks [53.559833501288146]
A typical EDNN has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating training and testing inputs differently at the two phases causes a mismatch between the training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
arXiv Detail & Related papers (2022-11-30T04:23:12Z) - Causality for Inherently Explainable Transformers: CAT-XPLAIN [16.85887568521622]
We utilize a recently proposed instance-wise post-hoc causal explanation method to make an existing transformer architecture inherently explainable.
Our model provides an explanation in the form of the top-$k$ regions of the given instance's input space that contribute to its decision.
arXiv Detail & Related papers (2022-06-29T18:11:01Z) - Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Optimization Variance: Exploring Generalization Properties of DNNs [83.78477167211315]
The test error of a deep neural network (DNN) often demonstrates double descent.
We propose a novel metric, optimization variance (OV), to measure the diversity of model updates.
arXiv Detail & Related papers (2021-06-03T09:34:17Z) - Correcting Classification: A Bayesian Framework Using Explanation Feedback to Improve Classification Abilities [2.0931163605360115]
Explanations are social, meaning they are a transfer of knowledge through interactions.
We overcome these difficulties by training a Bayesian convolutional neural network (CNN) that uses explanation feedback.
Our proposed method utilizes this feedback for fine-tuning to correct the model such that the explanations and classifications improve.
arXiv Detail & Related papers (2021-04-29T13:59:21Z) - ECINN: Efficient Counterfactuals from Invertible Neural Networks [80.94500245955591]
We propose a method, ECINN, that utilizes the generative capacities of invertible neural networks for image classification to generate counterfactual examples efficiently.
ECINN has a closed-form expression and generates a counterfactual in the time of only two evaluations.
Our experiments demonstrate how ECINN alters class-dependent image regions to change the perceptual and predicted class of the counterfactuals.
arXiv Detail & Related papers (2021-03-25T09:23:24Z) - CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks [40.47070962945751]
Graph neural networks (GNNs) have shown increasing promise in real-world applications.
We propose CF-GNNExplainer: the first method for generating counterfactual explanations for GNNs.
arXiv Detail & Related papers (2021-02-05T17:58:14Z)