Multi-Neuron Representations of Hierarchical Concepts in Spiking Neural Networks
- URL: http://arxiv.org/abs/2401.04628v2
- Date: Thu, 11 Apr 2024 15:43:23 GMT
- Title: Multi-Neuron Representations of Hierarchical Concepts in Spiking Neural Networks
- Authors: Nancy A. Lynch
- Abstract summary: We describe how hierarchical concepts can be represented in three types of layered neural networks.
The aim is to support recognition of the concepts when partial information about the concepts is presented, and also when some neurons in the network might fail.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe how hierarchical concepts can be represented in three types of layered neural networks. The aim is to support recognition of the concepts when partial information about the concepts is presented, and also when some of the neurons in the network might fail. Our failure model involves initial random failures. The three types of networks are: feed-forward networks with high connectivity, feed-forward networks with low connectivity, and layered networks with low connectivity and with both forward edges and "lateral" edges within layers. In order to achieve fault-tolerance, the representations all use multiple representative neurons for each concept. We show how recognition can work in all three of these settings, and quantify how the probability of correct recognition depends on several parameters, including the number of representatives and the neuron failure probability. We also discuss how these representations might be learned, in all three types of networks. For the feed-forward networks, the learning algorithms are similar to ones used in [4], whereas for networks with lateral edges, the algorithms are generally inspired by work on the assembly calculus [3, 6, 7].
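The abstract's central quantity is the probability that recognition still succeeds after initial random neuron failures, as a function of the number of representatives per concept and the failure probability. A minimal sketch of that dependence (not taken from the paper; the function name, threshold parameter, and independence assumption are my own illustration) is the binomial tail giving the chance that enough of a concept's representatives survive:

```python
import math

def recognition_survival_prob(r: int, p_fail: float, threshold: int) -> float:
    """Probability that at least `threshold` of the r representative
    neurons for a concept survive, assuming each neuron fails
    independently with probability p_fail (an initial-random-failure
    model like the one described in the abstract)."""
    return sum(
        math.comb(r, k) * (1 - p_fail) ** k * p_fail ** (r - k)
        for k in range(threshold, r + 1)
    )

# With the same failure rate and the same fraction of survivors required,
# more representatives per concept push the success probability toward 1:
few = recognition_survival_prob(r=5, p_fail=0.1, threshold=3)
many = recognition_survival_prob(r=50, p_fail=0.1, threshold=30)
```

This is only meant to show the shape of the trade-off the paper quantifies: redundancy (more representatives) buys fault-tolerance at the cost of more neurons per concept.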
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z)
- Learning Hierarchically-Structured Concepts II: Overlapping Concepts, and Networks With Feedback [4.847980206213334]
In Lynch and Mallmann-Trenn (Neural Networks, 2021), we considered simple tree-structured concepts and feed-forward layered networks.
Here we extend the model in two ways: we allow limited overlap between children of different concepts, and we allow networks to include feedback edges.
We describe and analyze algorithms for recognition and algorithms for learning.
arXiv Detail & Related papers (2023-04-19T10:11:29Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of neural networks' feature space may jointly serve as a network's performance discriminants.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- From Common Sense Reasoning to Neural Network Models through Multiple Preferences: an overview [0.0]
We discuss the relationships between conditional and preferential logics and neural network models.
We propose a concept-wise multipreference semantics, recently introduced for defeasible description logics.
The paper describes the general approach, through the cases of Self-Organising Maps and Multilayer Perceptrons.
arXiv Detail & Related papers (2021-07-10T16:25:19Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- The Representation Theory of Neural Networks [7.724617675868718]
We show that neural networks can be represented via the mathematical theory of quiver representations.
We show that network quivers gently adapt to common neural network concepts.
We also provide a quiver representation model to understand how a neural network creates representations from the data.
arXiv Detail & Related papers (2020-07-23T19:02:14Z)
- A Rigorous Framework for the Mean Field Limit of Multilayer Neural Networks [9.89901717499058]
We develop a mathematically rigorous framework for embedding neural networks in the mean field regime.
As the network's widths increase, the network's learning trajectory is shown to be well captured by a limit.
We prove several properties of large-width multilayer networks.
arXiv Detail & Related papers (2020-01-30T16:43:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.