Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2102.07799v1
- Date: Mon, 15 Feb 2021 19:10:00 GMT
- Title: Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of
Convolutional Neural Networks
- Authors: Mahesh Sudhakar, Sam Sattarzadeh, Konstantinos N. Plataniotis,
Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
- Abstract summary: We propose an efficient interpretation method for convolutional neural networks.
Experimental results show that the proposed method can reduce execution time by up to 30%.
- Score: 26.434705114982584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable AI (XAI) is an active research area that aims to
interpret a neural network's decisions while ensuring transparency and trust
in the task-specific learned models. Recently, perturbation-based model
analysis has produced better interpretations, but backpropagation techniques
still prevail because of their computational efficiency. In this work, we
combine both approaches into a hybrid visual explanation algorithm and
propose an efficient interpretation method for convolutional neural networks.
Our method adaptively selects the most critical features that contribute most
towards a prediction and uses them to probe the model. Experimental results
show that the proposed method can reduce execution time by up to 30% while
remaining competitive in interpretability, without compromising the quality
of the generated explanations.
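To make the hybrid idea concrete, here is a minimal sketch of the general recipe the abstract describes: a single backpropagation pass scores the feature maps of the last convolutional block, and only the highest-scoring maps are upsampled into masks that perturb the input. It assumes a model split into `backbone` and `head` and a simple gradient-energy selection rule; it illustrates the approach, not the authors' exact Ada-SISE algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def hybrid_saliency(backbone, head, x, target, keep_ratio=0.25):
    """Backprop step scores feature maps; perturbation step probes the model
    with only the top-scoring maps (illustrative sketch, not Ada-SISE)."""
    acts = backbone(x)                                  # (1, C, h, w)
    acts.retain_grad()
    head(acts)[0, target].backward()                    # one cheap backprop pass
    scores = acts.grad.abs().mean(dim=(2, 3))[0]        # gradient energy per map
    keep = scores.topk(max(1, int(keep_ratio * len(scores)))).indices

    sal = torch.zeros_like(x[0, 0])
    with torch.no_grad():
        for c in keep:
            m = acts[0, c].detach()
            m = (m - m.min()) / (m.max() - m.min() + 1e-8)  # normalize to [0, 1]
            m = F.interpolate(m[None, None], size=x.shape[-2:],
                              mode="bilinear", align_corners=False)[0, 0]
            p = F.softmax(head(backbone(x * m)), dim=1)[0, target]
            sal += p * m            # weight each mask by the model's response
    return sal / sal.max().clamp_min(1e-8)

# Hypothetical toy model, just to show the call signature.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
heatmap = hybrid_saliency(backbone, head, torch.randn(1, 3, 32, 32), target=3)
```

Skipping the low-scoring maps is where the reported efficiency gain would come from: the expensive perturbation passes are spent only on the features the cheap backpropagation pass flags as relevant.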
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, explanation consistency, to adaptively reweight the training samples during model learning.
Our framework then promotes model learning by paying closer attention to those training samples whose explanations differ the most.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
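As a rough sketch of the reweighting idea in the entry above: score each training sample by how much its explanation changes under a small input perturbation, then up-weight the inconsistent samples in the loss. The input-gradient saliency and the specific weighting rule are stand-ins, not the paper's actual explanation-consistency metric.

```python
import torch
import torch.nn.functional as F

def explanation(model, x, y):
    """Input-gradient saliency (a simple stand-in explanation)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return grad.abs()

def consistency_weights(model, x, y, eps=0.01):
    """Weight samples by explanation inconsistency (illustrative rule)."""
    e1 = explanation(model, x, y)
    e2 = explanation(model, x + eps * torch.randn_like(x), y)
    diff = (e1 - e2).flatten(1).norm(dim=1)    # per-sample explanation change
    return 1.0 + diff / (diff.mean() + 1e-8)   # inconsistent -> higher weight

def reweighted_loss(model, x, y):
    w = consistency_weights(model, x, y).detach()
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    return (w * per_sample).mean()
```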
- An Interpretable Alternative to Neural Representation Learning for Rating Prediction -- Transparent Latent Class Modeling of User Reviews [8.392465185798713]
We present a transparent probabilistic model that organizes user and product latent classes based on review information.
We evaluate our results in terms of both interpretability and predictive performance, in comparison with popular text-based neural approaches.
arXiv Detail & Related papers (2024-06-17T07:07:42Z)
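The transparent latent class idea above can be illustrated with a toy model: users and products each belong to a discrete class, and a rating is predicted from the class pair, so every prediction is directly readable from a small table. All sizes and the pre-assigned memberships below are hypothetical; the paper infers the classes from review text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_uclass, n_iclass = 50, 40, 3, 4   # hypothetical sizes

# Assume class memberships were already inferred (the paper learns them
# from reviews; here they are random placeholders).
user_class = rng.integers(0, n_uclass, n_users)
item_class = rng.integers(0, n_iclass, n_items)
ratings = rng.integers(1, 6, (n_users, n_items)).astype(float)

# One mean rating per (user class, item class) pair: every prediction is
# traceable to a small, human-readable table.
table = np.zeros((n_uclass, n_iclass))
for u in range(n_uclass):
    for i in range(n_iclass):
        block = ratings[np.ix_(user_class == u, item_class == i)]
        table[u, i] = block.mean() if block.size else ratings.mean()

def predict(user, item):
    return table[user_class[user], item_class[item]]
```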
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capability to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
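A minimal sketch of the kind of manipulation the entry above describes: fine-tune the model with a loss that preserves its predictions on real data while making a chosen decoy image dominate a target neuron, so that feature visualization converges to the decoy. The loss weights and the preservation term are assumptions; the paper's "gradient slingshot" formulation differs.

```python
import torch
import torch.nn.functional as F

def manipulation_step(model, frozen_copy, data_x, decoy, neuron, opt):
    """One fine-tuning step: preserve predictions, hijack one neuron's FV.
    Illustrative only; not the paper's actual objective."""
    opt.zero_grad()
    # 1) Preservation: keep outputs close to the original (frozen) model's.
    with torch.no_grad():
        ref = frozen_copy(data_x)
    preserve = F.mse_loss(model(data_x), ref)
    # 2) Manipulation: make the decoy strongly activate the target neuron,
    #    so activation maximization converges to the decoy instead.
    manipulate = -model(decoy)[0, neuron]
    (preserve + 0.1 * manipulate).backward()
    opt.step()
```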
- Learning Interpretable Deep Disentangled Neural Networks for Hyperspectral Unmixing [16.02193274044797]
We propose a new interpretable deep learning method for hyperspectral unmixing that accounts for nonlinearity and endmember variability.
The model is learned end-to-end using backpropagation, and trained using a self-supervised strategy.
Experimental results on synthetic and real datasets illustrate the performance of the proposed method.
arXiv Detail & Related papers (2023-10-03T18:21:37Z)
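A compact sketch of an interpretable unmixing autoencoder in the spirit of the entry above: the encoder outputs abundances that sum to one, the decoder mixes learned endmember spectra linearly, and a small nonlinear branch accounts for nonlinear mixing. The layer sizes and the form of the nonlinearity are assumptions, and the paper's endmember-variability modeling is omitted.

```python
import torch
import torch.nn as nn

class UnmixingAE(nn.Module):
    """Autoencoder sketch for hyperspectral unmixing (illustrative)."""
    def __init__(self, bands=200, n_endmembers=5):
        super().__init__()
        # Encoder -> abundances (sum to one via softmax).
        self.encoder = nn.Sequential(
            nn.Linear(bands, 64), nn.ReLU(),
            nn.Linear(64, n_endmembers), nn.Softmax(dim=-1))
        # Decoder: learned endmember spectra plus a nonlinear correction.
        self.endmembers = nn.Parameter(torch.rand(n_endmembers, bands))
        self.nonlinear = nn.Sequential(nn.Linear(bands, bands), nn.Tanh())

    def forward(self, x):
        a = self.encoder(x)                  # abundances per pixel
        linear = a @ self.endmembers         # linear mixing model
        return linear + 0.1 * self.nonlinear(linear), a

recon, abundances = UnmixingAE()(torch.rand(8, 200))
```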
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns the optimal source placement in large-scale networks online.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive performance guarantees that depend on network parameters, which in turn shape the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
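For orientation, the snippet below shows plain UCB over candidate source nodes; Grab-UCB additionally exploits the sparse graph-spectral model mentioned above, which this sketch does not attempt to reproduce.

```python
import numpy as np

def ucb_source_selection(reward_fn, n_nodes, horizon, c=2.0):
    """Plain UCB over candidate source nodes (illustrative only).
    Every node is treated as an independent arm."""
    counts = np.zeros(n_nodes)
    means = np.zeros(n_nodes)
    for t in range(1, horizon + 1):
        if t <= n_nodes:
            arm = t - 1                                   # play each arm once
        else:
            arm = int(np.argmax(means + np.sqrt(c * np.log(t) / counts)))
        r = reward_fn(arm)                                # observed reward
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]      # running average
    return int(np.argmax(means))                          # estimated best source

# Hypothetical reward: noisy, peaked at node 7.
best = ucb_source_selection(lambda a: -abs(a - 7) + np.random.randn(), 20, 500)
```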
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
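For context, this is the interpretation method under attack in the entry above: activation maximization synthesizes an input that maximally excites a chosen neuron by gradient ascent. The sketch is unregularized; practical feature visualization adds priors and transformations.

```python
import torch

def activation_maximization(model, neuron, shape=(1, 3, 64, 64),
                            steps=200, lr=0.1):
    """Gradient-ascent activation maximization (minimal sketch)."""
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = model(x)[0, neuron]      # activation of the target neuron
        (-act).backward()              # ascend the activation
        opt.step()
    return x.detach()                  # "what the neuron responds to"
```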
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
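A rough, BAIT-flavored scoring rule, using the standard last-layer Fisher factorization for a softmax head: the per-point Fisher trace splits into a label-variance term times a feature-norm term. BAIT itself optimizes a batch-level Fisher objective greedily; ranking points independently, as below, is a simplification.

```python
import torch
import torch.nn.functional as F

def fisher_embedding_scores(logits, features):
    """Score unlabeled points for active learning (rough sketch).

    logits: (N, K) model outputs; features: (N, D) penultimate features.
    For a softmax last layer, the per-point Fisher information is
    (diag(p) - p p^T) kron (x x^T); its trace factors as below."""
    p = F.softmax(logits, dim=1)                  # (N, K)
    label_var = (p * (1 - p)).sum(dim=1)          # trace of diag(p) - p p^T
    feat_norm = (features ** 2).sum(dim=1)        # trace of x x^T
    return label_var * feat_norm                  # per-point Fisher trace

def select_batch(logits, features, k):
    return fisher_embedding_scores(logits, features).topk(k).indices
```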
- Consistent feature selection for neural networks via Adaptive Group Lasso [3.42658286826597]
We propose and establish a theoretical guarantee for the use of the adaptive group Lasso to select important features of neural networks.
Specifically, we show that our feature selection method is consistent for single-output feed-forward neural networks with one hidden layer and hyperbolic tangent activation function.
arXiv Detail & Related papers (2020-05-30T18:50:56Z)
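The group Lasso construction in the entry above can be written down directly: each input feature owns a column group of the first-layer weight matrix, and an adaptive weight derived from a preliminary unpenalized fit shrinks unimportant groups toward zero. The penalty form is standard; the epsilon guard and hyperparameter values below are illustrative.

```python
import torch
import torch.nn as nn

def adaptive_group_lasso_penalty(first_layer: nn.Linear, init_norms,
                                 lam=1e-3, gamma=1.0):
    """Adaptive group Lasso penalty on a network's input features (sketch).

    init_norms: per-feature column norms from a preliminary unpenalized fit.
    Each input feature j is a column group of first_layer.weight."""
    col_norms = first_layer.weight.norm(dim=0)         # one norm per feature
    adaptive_w = 1.0 / (init_norms + 1e-8) ** gamma    # adaptive shrinkage
    return lam * (adaptive_w * col_norms).sum()

# Usage inside training: loss = task_loss + adaptive_group_lasso_penalty(...)
# Features whose column norm is driven to (near) zero are deselected.
```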
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that solves the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
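The phase-splitting idea in the last entry is easy to sketch: each unstable ReLU is fixed to an active or inactive phase, and every complete phase assignment becomes an independent, simpler subproblem that can be checked in parallel. The neuron names below are placeholders.

```python
from itertools import product

def split_by_relu_phases(unstable_neurons):
    """Case splitting for neural network verification (illustrative).

    Each unstable ReLU (one whose input may be positive or negative) is
    fixed to a phase: 'active' (output = input) or 'inactive' (output = 0).
    Every phase assignment yields a piecewise-linear subproblem that an
    independent worker can verify in parallel."""
    for phases in product(("active", "inactive"),
                          repeat=len(unstable_neurons)):
        yield dict(zip(unstable_neurons, phases))

# 3 unstable neurons -> 8 independent subproblems
subproblems = list(split_by_relu_phases(["n1", "n2", "n3"]))
```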
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.