Explaining Deep Convolutional Neural Networks for Image Classification by Evolving Local Interpretable Model-agnostic Explanations
- URL: http://arxiv.org/abs/2211.15143v1
- Date: Mon, 28 Nov 2022 08:56:00 GMT
- Title: Explaining Deep Convolutional Neural Networks for Image Classification by Evolving Local Interpretable Model-agnostic Explanations
- Authors: Bin Wang, Wenbin Pei, Bing Xue, Mengjie Zhang
- Abstract summary: The proposed method is model-agnostic, i.e., it can be used to explain any deep convolutional neural network model.
The evolved local explanations for four images randomly selected from ImageNet are presented.
The proposed method can obtain local explanations within one minute, which is more than ten times faster than LIME.
- Score: 7.474973880539888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks have proven effective and are
widely acknowledged as the dominant method for image classification.
However, a severe drawback of deep convolutional neural networks is poor
explainability. Unfortunately, in many real-world applications, users need to
understand the rationale behind the predictions of deep convolutional neural
networks when determining whether they should trust the predictions or not. To
resolve this issue, a novel genetic algorithm-based method is proposed, for the
first time, to automatically evolve local explanations that help users assess
the rationality of the predictions. Furthermore, the proposed method is
model-agnostic, i.e., it can be used to explain any deep convolutional neural
network model. In the experiments, ResNet is used as an example model
to be explained, and the ImageNet dataset is selected as the benchmark dataset.
DenseNet and MobileNet are further explained to demonstrate the model-agnostic
characteristic of the proposed method. The local explanations evolved for four
images randomly selected from ImageNet are presented, and they show that the
evolved explanations are straightforward for humans to recognise.
Moreover, the evolved explanations can explain the predictions of deep
convolutional neural networks on all four images very well by successfully
capturing meaningful, interpretable features of the sample images. Further
analysis over 30 experimental runs shows that the evolved local explanations
can also improve the probabilities/confidences of the deep convolutional
neural network models in making their predictions. The proposed method obtains
local explanations within one minute, more than ten times faster than LIME,
the state-of-the-art method.
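The abstract outlines the approach but includes no code. Purely as a hypothetical sketch of the general idea — a genetic algorithm evolving a binary mask over superpixels, with the model's confidence in the predicted class as the fitness signal — the Python outline below may help. Everything in it (`evolve_local_explanation`, the `model_predict` interface, the sparsity weight of 0.01) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def evolve_local_explanation(model_predict, image, segments, target_class,
                             pop_size=20, generations=30, mutation_rate=0.05,
                             rng=None):
    """Evolve a binary superpixel mask that preserves the model's confidence.

    model_predict : callable mapping a batch of images to class probabilities.
    segments      : integer array (H, W) assigning each pixel to a superpixel.
    """
    rng = np.random.default_rng(rng)
    n_segments = int(segments.max()) + 1

    def apply_mask(mask):
        # Keep pixels whose superpixel is selected; grey out the rest.
        keep = mask[segments].astype(bool)                    # (H, W)
        return np.where(keep[..., None], image, image.mean())

    def fitness(mask):
        prob = model_predict(apply_mask(mask)[None])[0, target_class]
        # Reward high confidence with few superpixels (sparse explanation).
        return prob - 0.01 * mask.mean()

    pop = rng.integers(0, 2, size=(pop_size, n_segments))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Uniform crossover between random parent pairs.
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))
        cross = rng.integers(0, 2, size=(pop_size, n_segments)).astype(bool)
        children = np.where(cross, parents[pairs[:, 0]], parents[pairs[:, 1]])
        # Bit-flip mutation.
        flips = rng.random((pop_size, n_segments)) < mutation_rate
        pop = np.where(flips, 1 - children, children)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best, apply_mask(best)
```

In practice, `segments` could come from an off-the-shelf superpixel algorithm such as `skimage.segmentation.slic`, and `model_predict` from a softmax wrapper around a pretrained ResNet; the returned mask then marks the superpixels the classifier appears to rely on.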
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- On the Convergence of Locally Adaptive and Scalable Diffusion-Based Sampling Methods for Deep Bayesian Neural Network Posteriors [2.3265565167163906]
Bayesian neural networks are a promising approach for modeling uncertainties in deep neural networks.
However, generating samples from the posterior distribution of neural networks is a major challenge.
One advance in that direction would be the incorporation of adaptive step sizes into Markov chain Monte Carlo sampling algorithms.
In this paper, we demonstrate that these methods can have a substantial bias in the distribution they sample, even in the limit of vanishing step sizes and at full batch size.
arXiv Detail & Related papers (2024-03-13T15:21:14Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- On Modifying a Neural Network's Perception [3.42658286826597]
We propose a method which allows one to modify what an artificial neural network is perceiving regarding specific human-defined concepts.
We test the proposed method on different models, assessing whether the performed manipulations are well interpreted by the models, and analyzing how they react to them.
arXiv Detail & Related papers (2023-03-05T12:09:37Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL)
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks [26.434705114982584]
We propose an efficient interpretation method for convolutional neural networks.
Experimental results show that the proposed method can reduce the execution time up to 30%.
arXiv Detail & Related papers (2021-02-15T19:10:00Z)
- Generate and Verify: Semantically Meaningful Formal Analysis of Neural Network Perception Systems [2.2559617939136505]
Testing remains the primary means of evaluating the accuracy of neural network perception systems.
We employ neural network verification to prove that a model will always produce estimates within some error bound of the ground truth.
arXiv Detail & Related papers (2020-12-16T23:09:53Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks [19.648814035399013]
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks.
We propose a new framework that allows one to convert any explanation method for neural networks into an explanation method for Bayesian neural networks.
We demonstrate the effectiveness and usefulness of our approach extensively in various experiments (a minimal sketch of this conversion idea appears after this list).
arXiv Detail & Related papers (2020-06-16T08:54:42Z)
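The conversion described in the last entry above is commonly realised by applying a base explanation method to many networks drawn from the weight posterior and aggregating the results. The sketch below is a minimal, assumption-laden illustration of that recipe; the function names and the mean/standard-deviation aggregation are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def bayesian_explanation(explain_fn, sample_model_fn, x, n_samples=50):
    """Lift a deterministic explanation method to a Bayesian neural network.

    explain_fn      : callable (model, x) -> attribution map (numpy array),
                      e.g. a saliency or LIME-style explainer.
    sample_model_fn : callable () -> one network drawn from the weight
                      posterior (e.g. one MC-dropout configuration).
    Returns the mean attribution and its per-feature standard deviation;
    the latter quantifies how much the explanation itself can be trusted.
    """
    maps = np.stack([explain_fn(sample_model_fn(), x)
                     for _ in range(n_samples)])
    return maps.mean(axis=0), maps.std(axis=0)
```

A high standard deviation in a region signals that attributions there vary across plausible models and should be interpreted cautiously.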
This list is automatically generated from the titles and abstracts of the papers in this site.