Identifying and interpreting tuning dimensions in deep networks
- URL: http://arxiv.org/abs/2011.03043v2
- Date: Tue, 8 Dec 2020 00:01:04 GMT
- Title: Identifying and interpreting tuning dimensions in deep networks
- Authors: Nolan S. Dey, J. Eric Taylor, Bryan P. Tripp, Alexander Wong, and Graham W. Taylor
- Abstract summary: A tuning dimension is a stimulus attribute that accounts for much of the activation variance of a group of neurons.
This work contributes an unsupervised framework for identifying and interpreting "tuning dimensions" in deep networks.
- Score: 83.59965686504822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In neuroscience, a tuning dimension is a stimulus attribute that accounts for
much of the activation variance of a group of neurons. These are commonly used
to decipher the responses of such groups. While researchers have attempted to
manually identify an analogue to these tuning dimensions in deep neural
networks, we are unaware of an automatic way to discover them. This work
contributes an unsupervised framework for identifying and interpreting "tuning
dimensions" in deep networks. Our method correctly identifies the tuning
dimensions of a synthetic Gabor filter bank and tuning dimensions of the first
two layers of InceptionV1 trained on ImageNet.
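The abstract does not spell out the framework, but its core idea, finding directions that explain most of the activation variance of a group of units and relating them to known stimulus attributes, can be sketched with PCA on a toy Gabor filter bank. The grating stimuli, filter parameters, and use of scikit-learn's PCA below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def gabor(size, theta, freq=0.2, sigma=4.0):
    """A 2-D Gabor filter at orientation theta (radians)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

size = 32
# "Neurons": a bank of Gabor filters with evenly spaced preferred orientations.
filters = np.stack([gabor(size, th) for th in np.linspace(0, np.pi, 16, endpoint=False)])

# Stimuli: grating-like patterns whose orientation is the (known) attribute we hope to recover.
thetas = rng.uniform(0, np.pi, 2000)
stimuli = np.stack([gabor(size, th, sigma=20.0) for th in thetas])

# Activation matrix: one row per stimulus, one column per neuron (dot-product response).
acts = stimuli.reshape(len(thetas), -1) @ filters.reshape(len(filters), -1).T

# PCA on the activations: each principal component is a candidate "tuning dimension".
pca = PCA(n_components=4).fit(acts)
scores = pca.transform(acts)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# Interpret the leading components by correlating them with the stimulus attribute.
# Orientation has period pi, so an orientation-tuned population is expected to track
# cos(2*theta) and sin(2*theta) in its first two components.
for k in range(2):
    r_cos = np.corrcoef(scores[:, k], np.cos(2 * thetas))[0, 1]
    r_sin = np.corrcoef(scores[:, k], np.sin(2 * thetas))[0, 1]
    print(f"PC{k + 1}: |r| with cos(2*theta)={abs(r_cos):.2f}, with sin(2*theta)={abs(r_sin):.2f}")
```

In this toy setting, a leading component that varies smoothly with grating orientation would be read off as an orientation tuning dimension; for a trained network the same correlation step would use whatever stimulus attributes are hypothesized.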
Related papers
- Residual Random Neural Networks [0.0]
A single-layer feedforward neural network with random weights is a recurring motif in the neural network literature.
We show that one can obtain good classification results even if the number of hidden neurons has the same order of magnitude as the dimensionality of the data samples.
arXiv Detail & Related papers (2024-10-25T22:00:11Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Exploring Geometry of Blind Spots in Vision Models [56.47644447201878]
We study the phenomenon of under-sensitivity in vision models such as CNNs and Transformers.
We propose a Level Set Traversal algorithm that iteratively explores regions of high confidence with respect to the input space.
We estimate the extent of these connected higher-dimensional regions over which the model maintains a high degree of confidence.
arXiv Detail & Related papers (2023-10-30T18:00:33Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Deep learning neural network for approaching Schrödinger problems with arbitrary two-dimensional confinement [0.0]
This article presents an approach to the two-dimensional Schrödinger equation based on machine learning methods with neural networks.
It is intended to determine the ground state of a particle confined in any two-dimensional potential, starting from the knowledge of the solutions to a large number of arbitrary sample problems.
arXiv Detail & Related papers (2023-04-03T19:48:33Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Explainable Deep Belief Network based Auto encoder using novel Extended Garson Algorithm [6.228766191647919]
We develop an algorithm to explain a Deep Belief Network based Auto-encoder (DBNA).
It is used to determine the contribution of each input feature in the DBN.
Important features identified by this method are compared against those obtained by Wald chi-square (chi2).
arXiv Detail & Related papers (2022-07-18T10:44:02Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural networks maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
arXiv Detail & Related papers (2021-10-27T17:59:46Z)
- On Tractable Representations of Binary Neural Networks [23.50970665150779]
We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs).
In experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
arXiv Detail & Related papers (2020-04-05T03:21:26Z)
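As a rough illustration of what compiling a decision function into an OBDD involves (not the method of the paper above), the sketch below Shannon-expands a made-up three-input binarized network into a reduced ordered BDD and checks it against exhaustive enumeration; the weights and the brute-force expansion are assumptions for the toy example, and real compilers exploit the network structure instead.

```python
from itertools import product

# Toy binarized network: inputs in {0,1} are mapped to {-1,+1}; a hidden layer of
# sign units with +/-1 weights feeds a sign output. The weights are made up for
# illustration; real BNNs are larger but define a Boolean decision function the same way.
N_INPUTS = 3
W1, B1 = [(+1, -1, +1), (-1, +1, +1)], (0, 0)
W2, B2 = (+1, +1), -1

def net(x):
    """Boolean decision function of the toy binarized network."""
    s = [1 if v else -1 for v in x]
    h = [1 if sum(w * v for w, v in zip(row, s)) + b >= 0 else -1
         for row, b in zip(W1, B1)]
    return sum(w * v for w, v in zip(W2, h)) + B2 >= 0

def mk(var, lo, hi):
    """Reduced OBDD node: drop redundant tests; value-equal tuples act as shared subgraphs."""
    return lo if lo == hi else (var, lo, hi)

def compile_obdd(prefix=()):
    """Shannon-expand net() over x_0, x_1, ... to obtain a reduced ordered BDD."""
    i = len(prefix)
    if i == N_INPUTS:
        return net(prefix)                # terminal: True or False
    lo = compile_obdd(prefix + (0,))      # cofactor with x_i = 0
    hi = compile_obdd(prefix + (1,))      # cofactor with x_i = 1
    return mk(i, lo, hi)

def evaluate(node, x):
    """Follow one root-to-terminal path of the OBDD for input x."""
    while not isinstance(node, bool):
        var, lo, hi = node
        node = hi if x[var] else lo
    return node

def size(node, seen):
    """Count distinct internal nodes of the OBDD."""
    if isinstance(node, bool) or node in seen:
        return 0
    seen.add(node)
    return 1 + size(node[1], seen) + size(node[2], seen)

bdd = compile_obdd()
assert all(evaluate(bdd, x) == net(x) for x in product((0, 1), repeat=N_INPUTS))
print(f"{size(bdd, set())} OBDD nodes represent all {2 ** N_INPUTS} input assignments")
```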
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.