Towards a fuller understanding of neurons with Clustered Compositional Explanations
- URL: http://arxiv.org/abs/2310.18443v1
- Date: Fri, 27 Oct 2023 19:39:50 GMT
- Title: Towards a fuller understanding of neurons with Clustered Compositional Explanations
- Authors: Biagio La Rosa, Leilani H. Gilpin, Roberto Capobianco
- Abstract summary: We propose a generalization, called Clustered Compositional Explanations, that combines Compositional Explanations with clustering and a novel search heuristic to approximate a broader spectrum of the neurons' behavior.
We define and address the problems that arise when applying these methods to multiple ranges of activations, analyze the insights retrievable by our algorithm, and propose desiderata that can be used to study the explanations returned by different algorithms.
- Score: 8.440673378588489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compositional Explanations is a method for identifying logical
formulas of concepts that approximate a neuron's behavior. However, these
explanations are tied to the narrow range of neuron activations (i.e., the
highest ones) used to check the alignment, and therefore lack completeness. In
this paper, we propose a generalization, called Clustered Compositional
Explanations, that combines Compositional Explanations with clustering and a
novel search heuristic to approximate a broader spectrum of the neurons'
behavior. We define and address the problems that arise when applying these
methods to multiple ranges of activations, analyze the insights retrievable by
our algorithm, and propose desiderata that can be used to study the
explanations returned by different algorithms.
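As a rough, non-authoritative illustration of the clustering idea, the sketch below groups a single neuron's activations into ranges with k-means and derives one binary mask per range, each of which could then be aligned with a logical formula of concepts. The use of k-means, the five-cluster default, and all names here are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch, NOT the paper's implementation: cluster a neuron's
# activations into ranges and build one boolean mask per range. Vanilla
# Compositional Explanations keeps only the highest activations; clustering
# lets low and mid ranges receive their own logical explanations too.
import numpy as np
from sklearn.cluster import KMeans

def activation_range_masks(activations: np.ndarray, n_clusters: int = 5):
    """Return one boolean mask per activation range of a single neuron.

    activations: flat array of the neuron's activations over a probing set.
    n_clusters: number of activation ranges (an illustrative default).
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        activations.reshape(-1, 1)  # cluster on the scalar activation value
    )
    return [labels == c for c in range(n_clusters)]

# Toy usage: 10,000 synthetic activations split into 5 ranges.
acts = np.random.rand(10_000)
masks = activation_range_masks(acts)
```

Each mask then plays the role that the single top-activation mask plays in the original method: it is the target that a logical formula of concepts should cover.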
Related papers
- Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077] (2024-07-19)
  Many neural nets appear to represent data as linear combinations of "feature vectors".
  We argue that this success is incomplete without an understanding of relational composition.
- Vector-based Representation is the Key: A Study on Disentanglement and Compositional Generalization [77.57425909520167] (2023-05-29)
  We show that it is possible to achieve both good concept recognition and novel concept composition.
  We propose a method to reform the scalar-based disentanglement works to be vector-based to increase both capabilities.
- Disentangling Neuron Representations with Concept Vectors [0.0] (2023-04-19)
  The main contribution of this paper is a method to disentangle polysemantic neurons into concept vectors encapsulating distinct features.
  Our evaluations show that the concept vectors found encode coherent, human-understandable features.
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757] (2022-10-04)
  We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
  NSR integrates neural perception, syntactic parsing, and semantic reasoning.
  We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
- Generalizable Neuro-symbolic Systems for Commonsense Question Answering [67.72218865519493] (2022-01-17)
  This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks.
  Different methods for integrating neural language models and knowledge graphs are discussed.
- Compositional Explanations of Neurons [52.71742655312625] (2020-06-24)
  We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts (a minimal sketch of this style of search appears after this list).
  We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
- Compositional Generalization by Learning Analytical Expressions [87.15737632096378] (2020-06-18)
  A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
  Experiments on the well-known SCAN benchmark demonstrate that the model achieves strong compositional generalization.
- Distilling neural networks into skipgram-level decision lists [4.109840601429086] (2020-05-14)
  We propose a pipeline to explain RNNs by means of decision lists (also called rules) over skipgrams.
  We find that our technique consistently achieves high explanation fidelity and yields qualitatively interpretable rules.
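For readers unfamiliar with the base procedure referenced above, here is a minimal sketch of an IoU-driven beam search over logical formulas of concept masks, in the spirit of Compositional Explanations of Neurons. The function names, beam width, and formula length are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch, assuming binary concept masks are available: grow
# logical formulas (AND / OR / AND NOT) over concept masks beam-style and
# rank them by IoU against the neuron's activation mask.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def beam_search(neuron_mask, concept_masks, max_len=3, beam_width=5):
    """Return the (formula, mask) pair best aligned with neuron_mask.

    concept_masks: dict mapping concept name -> boolean mask.
    """
    # Start from single-concept formulas, keep the best beam_width of them.
    beam = sorted(
        concept_masks.items(),
        key=lambda fm: iou(neuron_mask, fm[1]),
        reverse=True,
    )[:beam_width]
    for _ in range(max_len - 1):
        candidates = list(beam)
        for formula, fmask in beam:
            for name, cmask in concept_masks.items():
                candidates.append((f"({formula} AND {name})", fmask & cmask))
                candidates.append((f"({formula} OR {name})", fmask | cmask))
                candidates.append((f"({formula} AND NOT {name})", fmask & ~cmask))
        candidates.sort(key=lambda fm: iou(neuron_mask, fm[1]), reverse=True)
        beam = candidates[:beam_width]
    return beam[0]

# Toy usage with synthetic masks over 1,000 positions.
rng = np.random.default_rng(0)
neuron = rng.random(1000) > 0.7
concepts = {name: rng.random(1000) > 0.5 for name in ("dog", "grass", "sky")}
best_formula, best_mask = beam_search(neuron, concepts)
```

Under the clustered generalization proposed in this paper, such a search would run once per activation-range mask rather than only on the top-activation mask.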
This list is automatically generated from the titles and abstracts of the papers listed on this site.