Neural Response Interpretation through the Lens of Critical Pathways
- URL: http://arxiv.org/abs/2103.16886v1
- Date: Wed, 31 Mar 2021 08:08:41 GMT
- Title: Neural Response Interpretation through the Lens of Critical Pathways
- Authors: Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian
Rupprecht, Seong Tae Kim, Nassir Navab
- Abstract summary: We discuss the problem of identifying critical pathways and leverage them for interpreting the network's response to an input.
We demonstrate that sparse pathways derived from pruning do not necessarily encode critical input information.
To ensure sparse pathways include critical fragments of the encoded input information, we propose pathway selection via neurons' contribution to the response.
- Score: 52.41018985255681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Is critical input information encoded in specific sparse pathways within the
neural network? In this work, we discuss the problem of identifying these
critical pathways and subsequently leverage them for interpreting the network's
response to an input. The pruning objective -- selecting the smallest group of
neurons for which the response remains equivalent to the original network --
has been previously proposed for identifying critical pathways. We demonstrate
that sparse pathways derived from pruning do not necessarily encode critical
input information. To ensure sparse pathways include critical fragments of the
encoded input information, we propose pathway selection via neurons'
contribution to the response. We proceed to explain how critical pathways can
reveal critical input features. We prove that pathways selected via neuron
contribution are locally linear (in an L2-ball), a property that we use for
proposing a feature attribution method: "pathway gradient". We validate our
interpretation method using mainstream evaluation experiments. The validation
of the pathway gradient interpretation method further confirms that pathways
selected via neuron contributions correspond to critical input features. The
code is publicly available.
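The abstract's recipe — score each neuron's contribution to the response, keep only the top contributors as the pathway, then back-propagate through that restricted sub-network — can be sketched on a toy one-hidden-layer ReLU net. This is an illustrative reconstruction, not the authors' released code: the contribution score (activation times downstream weight) and the `keep_ratio` parameter are simplifying assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pathway_gradient(x, W1, W2, keep_ratio=0.5):
    """Toy sketch of a pathway-gradient attribution (assumed form).

    1) Forward pass through a one-hidden-layer ReLU net.
    2) Score each hidden neuron's contribution to the scalar response
       (here: activation * downstream weight, a simple proxy).
    3) Keep the top-k contributing neurons as the "critical pathway".
    4) Back-propagate the response gradient through that pathway only,
       and return an input-times-gradient attribution.
    """
    z = W1 @ x                       # hidden pre-activations
    h = relu(z)                      # hidden activations
    # W2 has shape (1, hidden); response y = W2 @ h (not needed below)

    contrib = h * W2.ravel()         # per-neuron contribution to y
    k = max(1, int(keep_ratio * h.size))
    pathway = np.zeros_like(h)       # binary mask over hidden neurons
    pathway[np.argsort(-np.abs(contrib))[:k]] = 1.0

    # Gradient of y w.r.t. x, restricted to the selected pathway.
    relu_grad = (z > 0).astype(float)
    grad_x = W1.T @ (W2.ravel() * relu_grad * pathway)
    return grad_x * x                # input-times-gradient attribution
```

With `keep_ratio=1.0` the mask keeps every neuron and the result reduces to the ordinary input-times-gradient attribution; shrinking the ratio restricts the explanation to the high-contribution sub-network, which is the intuition behind selecting pathways by contribution rather than by pruning alone.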
Related papers
- Understanding the Role of Pathways in a Deep Neural Network [4.456675543894722]
We analyze a convolutional neural network (CNN) trained in the classification task and present an algorithm to extract the diffusion pathways of individual pixels.
We find that the few largest pathways of an individual pixel from an image tend to cross the feature maps in each layer that is important for classification.
arXiv Detail & Related papers (2024-02-28T07:53:19Z)
- DISCOVER: Making Vision Networks Interpretable via Competition and Dissection [11.028520416752325]
This work contributes to post-hoc interpretability, and specifically Network Dissection.
Our goal is to present a framework that makes it easier to discover the individual functionality of each neuron in a network trained on a vision task.
arXiv Detail & Related papers (2023-10-07T21:57:23Z)
- Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information [53.28701922632817]
We propose a method to identify features with predictive information in the input domain.
The core idea of our method is leveraging a bottleneck on the input that only lets input features associated with predictive latent features pass through.
arXiv Detail & Related papers (2021-10-04T14:13:42Z)
- Adaptive conversion of real-valued input into spike trains [91.3755431537592]
This paper presents a biologically plausible method for converting real-valued input into spike trains for processing with spiking neural networks.
The proposed method mimics the adaptive behaviour of retinal ganglion cells and allows input neurons to adapt their response to changes in the statistics of the input.
arXiv Detail & Related papers (2021-04-12T12:33:52Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Where's the Question? A Multi-channel Deep Convolutional Neural Network for Question Identification in Textual Data [83.89578557287658]
We propose a novel multi-channel deep convolutional neural network architecture, namely Quest-CNN, for identifying real questions in textual data.
We conducted a comprehensive performance comparison analysis of the proposed network against other deep neural networks.
The proposed Quest-CNN achieved the best F1 score both on a dataset of data entry-review dialogue in a dialysis care setting, and on a general domain dataset.
arXiv Detail & Related papers (2020-10-15T15:11:22Z)
- Neural Anisotropy Directions [63.627760598441796]
We define neural anisotropy directions (NADs) as the vectors that encapsulate the directional inductive bias of an architecture.
We show that for the CIFAR-10 dataset, NADs characterize the features used by CNNs to discriminate between different classes.
arXiv Detail & Related papers (2020-06-17T08:36:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.