On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
- URL: http://arxiv.org/abs/2404.13567v1
- Date: Sun, 21 Apr 2024 07:57:45 GMT
- Title: On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
- Authors: Abhilekha Dalal, Rushrukh Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler
- Abstract summary: The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans.
We introduce a novel model-agnostic post-hoc Explainable AI method and demonstrate that it provides meaningful interpretations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major challenge in Explainable AI is correctly interpreting activations of hidden neurons: accurate interpretations would help answer the question of what a deep learning system internally detects as relevant in the input, demystifying the otherwise black-box nature of deep learning systems. The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans, but systematic automated methods that would be able to hypothesize and verify interpretations of hidden neuron activations are underexplored. This is particularly the case for approaches that can both draw explanations from substantial background knowledge, and that are based on inherently explainable (symbolic) methods. In this paper, we introduce a novel model-agnostic post-hoc Explainable AI method and demonstrate that it provides meaningful interpretations. Our approach is based on using a Wikipedia-derived concept hierarchy with approximately 2 million classes as background knowledge, and utilizes OWL-reasoning-based Concept Induction for explanation generation. Additionally, we explore and compare the capabilities of off-the-shelf pre-trained multimodal-based explainable methods. Our results indicate that our approach can automatically attach meaningful class expressions as explanations to individual neurons in the dense layer of a Convolutional Neural Network. Evaluation through statistical analysis and degree of concept activation in the hidden layer shows that our method provides a competitive edge in both quantitative and qualitative aspects compared to prior work.
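The core idea of the abstract — labeling a neuron with the most specific background-knowledge class that covers the inputs activating it — can be illustrated with a minimal pure-Python sketch. This is not the paper's method: the authors use OWL-reasoning-based Concept Induction over a ~2 million-class Wikipedia-derived hierarchy, while the toy hierarchy, class names, and `induce_label` helper below are invented stand-ins that only show the subsumption intuition.

```python
# Toy sketch of concept-induction-style neuron labeling (hypothetical data).
# Idea: label a neuron with the most specific class that subsumes all inputs
# which activate it strongly.

# Hypothetical class hierarchy: child -> parent.
PARENTS = {
    "beagle": "dog", "poodle": "dog", "tabby": "cat",
    "dog": "mammal", "cat": "mammal", "mammal": "animal",
}

def ancestors(cls):
    """Return cls and all its ancestors, most specific first."""
    chain = [cls]
    while cls in PARENTS:
        cls = PARENTS[cls]
        chain.append(cls)
    return chain

def induce_label(activating_classes):
    """Most specific class covering every strongly-activating input."""
    common = set(ancestors(activating_classes[0]))
    for c in activating_classes[1:]:
        common &= set(ancestors(c))
    # Walk one chain from specific to general; first shared class wins.
    for c in ancestors(activating_classes[0]):
        if c in common:
            return c
    return None

# A neuron firing on beagle and poodle images gets labeled "dog";
# one firing on beagle and tabby images generalizes to "mammal".
print(induce_label(["beagle", "poodle"]))  # -> dog
print(induce_label(["beagle", "tabby"]))   # -> mammal
```

In the actual system, the hierarchy is an OWL ontology and a reasoner proposes class expressions, but the least-common-subsumer intuition is the same.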
Related papers
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Understanding CNN Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning [3.6223658572137825]
The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans.
We show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network.
arXiv Detail & Related papers (2023-08-08T02:28:50Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
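Activation maximization, the technique this entry attacks, synthesizes an input that maximally excites a chosen neuron via gradient ascent. The framework-free sketch below shows that core loop on a single toy "neuron" `a(x) = tanh(w · x)`; real methods optimize images through a trained network, typically with regularizers, and the weights and step count here are illustrative assumptions.

```python
import math

# Toy activation maximization: gradient ascent on the input to maximize
# one hypothetical neuron a(x) = tanh(w . x).

W = [0.5, -1.0, 2.0]  # invented neuron weights

def activation(x):
    return math.tanh(sum(wi * xi for wi, xi in zip(W, x)))

def grad(x):
    # d/dx tanh(w . x) = (1 - tanh(w . x)^2) * w
    a = activation(x)
    return [(1 - a * a) * wi for wi in W]

x = [0.0, 0.0, 0.0]
for _ in range(200):  # plain gradient ascent, fixed step size
    g = grad(x)
    x = [xi + 0.1 * gi for xi, gi in zip(x, g)]

# The optimized input drives the activation toward its maximum of 1.0;
# inspecting such inputs is how the interpretation is formed (and, per
# this paper, how it can be deceived).
print(activation(x))
```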
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- Explaining Deep Learning Hidden Neuron Activations using Concept Induction [3.6223658572137825]
The state of the art indicates that hidden node activations appear to be interpretable in a way that makes sense to humans.
We show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network.
arXiv Detail & Related papers (2023-01-23T18:14:32Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- An Interpretable Neuron Embedding for Static Knowledge Distillation [7.644253344815002]
We propose a new interpretable neural network method, by embedding neurons into the semantic space.
The proposed semantic vector externalizes the latent knowledge to static knowledge, which is easy to exploit.
Empirical experiments of visualization show that semantic vectors describe neuron activation semantics well.
arXiv Detail & Related papers (2022-11-14T03:26:10Z)
- Neural Activation Patterns (NAPs): Visual Explainability of Learned Concepts [8.562628320010035]
We present a method that takes into account the entire activation distribution.
By extracting similar activation profiles within the high-dimensional activation space of a neural network layer, we find groups of inputs that are treated similarly.
These input groups represent neural activation patterns (NAPs) and can be used to visualize and interpret learned layer concepts.
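The NAP idea — grouping inputs whose layer-activation profiles are similar and interpreting each group — can be sketched with a simple greedy cosine-similarity grouping. This is a stand-in for the paper's actual procedure, and the activation vectors, threshold, and `group_by_profile` helper are all invented for illustration.

```python
import math

# Sketch: cluster inputs by similarity of their activation profiles
# within one layer; each resulting group is a candidate "pattern".

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def group_by_profile(profiles, threshold=0.9):
    """Greedily assign each input to the first sufficiently similar group."""
    groups = []  # list of (representative_profile, [input indices])
    for i, p in enumerate(profiles):
        for rep, members in groups:
            if cosine(p, rep) >= threshold:
                members.append(i)
                break
        else:
            groups.append((p, [i]))
    return [members for _, members in groups]

# Hypothetical activations of one layer for four inputs.
acts = [
    [0.9, 0.1, 0.0],  # input 0
    [0.8, 0.2, 0.0],  # input 1: profile similar to input 0
    [0.0, 0.1, 0.9],  # input 2: different pattern
    [0.1, 0.0, 1.0],  # input 3: similar to input 2
]
print(group_by_profile(acts))  # -> [[0, 1], [2, 3]]
```

Visualizing representative inputs from each group is then what makes the learned layer concepts inspectable.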
arXiv Detail & Related papers (2022-06-20T09:05:57Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks [1.5854438418597576]
We present gradient-based interpretability methods for explaining decisions of deep neural networks.
We discuss the role that adversarial robustness plays in having meaningful explanations.
We conclude with the future directions for research in the area at the convergence of robustness and explainability.
arXiv Detail & Related papers (2021-07-23T18:06:29Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
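The compositional-explanation procedure above can be sketched as a search over logical combinations of concept masks, scored against a neuron's binarized activations by intersection-over-union (IoU). The sketch below follows that general recipe, but the concepts, input sets, and candidate-formula enumeration are simplified, invented examples, not the paper's implementation.

```python
# Sketch: explain a neuron by the boolean concept formula whose mask
# best matches (by IoU) the set of inputs the neuron fires on.

def iou(a, b):
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

# Inputs (by index) on which each hypothetical concept holds.
concepts = {
    "water": {0, 1, 2, 3},
    "blue":  {2, 3, 4, 5},
    "sky":   {4, 5},
}
ALL = set(range(8))

# Inputs on which the neuron fires (binarized activations).
neuron_on = {2, 3}

# Enumerate simple compositions: C, NOT C, C1 AND C2, C1 OR C2.
candidates = {}
names = list(concepts)
for n in names:
    candidates[n] = concepts[n]
    candidates[f"NOT {n}"] = ALL - concepts[n]
for i, a in enumerate(names):
    for b in names[i + 1:]:
        candidates[f"{a} AND {b}"] = concepts[a] & concepts[b]
        candidates[f"{a} OR {b}"] = concepts[a] | concepts[b]

best = max(candidates, key=lambda f: iou(candidates[f], neuron_on))
print(best)  # -> water AND blue (its mask {2, 3} matches exactly)
```

The paper searches far richer compositional formulas over learned concept annotations; the scoring-by-IoU skeleton is the transferable part.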
This list is automatically generated from the titles and abstracts of the papers in this site.