A Step Towards Uncovering The Structure of Multistable Neural Networks
- URL: http://arxiv.org/abs/2210.03241v1
- Date: Thu, 6 Oct 2022 22:54:17 GMT
- Title: A Step Towards Uncovering The Structure of Multistable Neural Networks
- Authors: Magnus Tournoy and Brent Doiron
- Abstract summary: We study the structure of multistable recurrent neural networks.
The activation function is simplified to a nonsmooth Heaviside step function.
We derive how multistability is encoded within the network architecture.
- Score: 1.14219428942199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the structure of multistable recurrent neural networks. The
activation function is simplified to a nonsmooth Heaviside step function. This
nonlinearity partitions the phase space into regions with different, yet linear
dynamics. We derive how multistability is encoded within the network
architecture. Stable states are identified by their semipositivity constraints
on the synaptic weight matrix. The restrictions can be separated by their
effects on the signs or the strengths of the connections. Exact results on
network topology, sign stability, weight matrix factorization, pattern
completion and pattern coupling are derived and proven. These results may lay the
foundation for more complex recurrent neural networks and neurocomputing.
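To make the setting concrete, below is a minimal sketch (an assumed model, not the authors' code) of the common leaky form dx/dt = -x + W H(x), with H the elementwise Heaviside step. Inside a region where H(x) equals a fixed binary pattern s, the flow is linear with fixed point x* = W s, and s corresponds to a stable state exactly when the sign condition H(W s) = s holds, a semipositivity-type constraint on the weight matrix W. The brute-force enumeration is for illustration only; the paper instead derives structural conditions on W.

```python
# Minimal sketch (assumed model, not the paper's code): leaky dynamics
#   dx/dt = -x + W H(x),  H = elementwise Heaviside step.
# Where H(x) equals a fixed binary pattern s, the flow is linear with
# fixed point x* = W s; the pattern is self-consistent (a stable state)
# iff H(W s) = s, a sign/semipositivity condition on W.
import itertools
import numpy as np

def heaviside(x):
    return (x > 0).astype(float)

def stable_patterns(W):
    """Enumerate all binary patterns s with H(W s) = s (brute force)."""
    states = []
    for bits in itertools.product([0.0, 1.0], repeat=W.shape[0]):
        s = np.array(bits)
        if np.array_equal(heaviside(W @ s), s):
            states.append(s)
    return states

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
for s in stable_patterns(W):
    print("pattern", s.astype(int), "-> fixed point x* =", W @ s)
```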
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
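For concreteness, the semantic loss of a constraint is the negative log-probability that independently sampling each binary variable from its predicted probability yields a satisfying assignment. Below is a hedged sketch for an assumed exactly-one-of-n constraint (the function name and setup are illustrative, not the paper's API):

```python
# Sketch of the semantic loss for an assumed "exactly one of n" constraint:
# sum the probability mass of all satisfying assignments, take -log.
import numpy as np

def semantic_loss_exactly_one(p):
    """p: predicted Bernoulli probabilities for n binary variables."""
    p = np.asarray(p, dtype=float)
    # P(exactly one variable true) = sum_i p_i * prod_{j != i} (1 - p_j)
    sat = sum(p[i] * np.prod(np.delete(1.0 - p, i)) for i in range(len(p)))
    return -np.log(sat)

print(semantic_loss_exactly_one([0.9, 0.05, 0.05]))  # nearly satisfied: small loss
print(semantic_loss_exactly_one([0.5, 0.5, 0.5]))    # uncommitted: larger loss
```

Note how the loss depends only on which assignments satisfy the constraint, not on how the symbols are arranged, matching the summary above.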
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
- Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
In contrast to ReLU networks, we prove that spiking neural networks can realize both continuous and discontinuous functions.
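A toy illustration of why firing-time codes can express discontinuities (this is not the paper's construction; the threshold and window values below are arbitrary assumptions): a single perfect-integrator neuron that either fires within a fixed coding window or stays silent already computes a discontinuous step function of its input current, something no finite ReLU network represents exactly.

```python
# Illustrative sketch, not the paper's construction: time-to-first-spike
# coding with a perfect integrator yields a discontinuous input-output map.
THETA, T_MAX = 1.0, 1.0   # firing threshold and coding window (assumed values)

def first_spike_output(current):
    """Membrane v(t) = current * t; spike when v >= THETA.
    Output 1 if the neuron fires inside the coding window, else 0."""
    if current <= 0:
        return 0.0                # never reaches threshold
    t_spike = THETA / current     # time of first spike
    return 1.0 if t_spike <= T_MAX else 0.0

for i in [0.5, 0.999, 1.0, 1.001, 2.0]:
    print(f"input {i:5.3f} -> output {first_spike_output(i)}")  # jump at 1.0
```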
arXiv Detail & Related papers (2023-08-16T08:45:53Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
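For orientation, here is a rough sketch of the per-layer computation that this measure extends, as commonly described (the details below are a plausible reading, not the authors' code): normalise the absolute weights to [0, 1], insert the layer's bipartite edges in decreasing order of weight, and record a zero-dimensional persistence pair (birth 1, death w) whenever an edge merges two components, Kruskal-style.

```python
# Rough per-layer sketch of neural persistence (assumed details, not the
# authors' code): 0-dimensional persistence of the weight-filtered
# bipartite layer graph via union-find.
import numpy as np

def neural_persistence(W, p=2):
    w = np.abs(W) / np.abs(W).max()        # normalised edge weights in [0, 1]
    n_in, n_out = W.shape
    parent = list(range(n_in + n_out))     # union-find over layer neurons

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = sorted(((w[i, j], i, n_in + j)
                    for i in range(n_in) for j in range(n_out)),
                   reverse=True)           # strongest edges first
    pers = []
    for weight, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                       # edge merges two components
            parent[ra] = rb
            pers.append(1.0 - weight)      # pair (birth 1, death weight)
    return np.linalg.norm(pers, ord=p)

rng = np.random.default_rng(0)
print(neural_persistence(rng.normal(size=(8, 4))))
```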
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Vanilla Feedforward Neural Networks as a Discretization of Dynamical Systems [9.382423715831687]
In this paper, we return to the classical network structure and prove that vanilla feedforward networks can also be viewed as a numerical discretization of dynamical systems.
Our results could provide a new perspective for understanding the approximation properties of feedforward neural networks.
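The most familiar instance of this correspondence is the residual/Euler picture, sketched below (hedged: the paper's construction for vanilla, non-residual networks differs in detail): L layers of the update x <- x + h * sigma(W x + b) are exactly L forward-Euler steps of the ODE dx/dt = sigma(W(t) x + b(t)).

```python
# Residual/Euler sketch of the network-as-discretization view (simplified
# relative to the paper, which treats vanilla non-residual networks).
import numpy as np

def tanh_layer(x, W, b):
    return np.tanh(W @ x + b)

def forward_as_euler(x0, layers, h=0.1):
    """Each layer applies one forward-Euler step of size h."""
    x = x0
    for W, b in layers:
        x = x + h * tanh_layer(x, W, b)   # x_{k+1} = x_k + h * f(x_k)
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 3)), rng.normal(size=3)) for _ in range(20)]
print(forward_as_euler(np.ones(3), layers))
```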
arXiv Detail & Related papers (2022-09-22T10:32:08Z)
- Stability Analysis of Fractional Order Memristor Synapse-coupled Hopfield Neural Network with Ring Structure [0.0]
We first present a fractional-order memristor synapse-coupled Hopfield neural network on two neurons.
We then extend the model to a neural network with a ring structure consisting of n sub-networks, which increases synchronization in the network.
In the n-neuron case, it is revealed that the stability depends on the structure and number of sub-networks.
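The standard linear test behind such stability results is Matignon's condition: an equilibrium of the fractional-order system D^alpha x = A x with 0 < alpha <= 1 is asymptotically stable iff every eigenvalue lambda of the Jacobian A satisfies |arg(lambda)| > alpha * pi / 2. A minimal sketch follows (the matrix is an arbitrary stand-in, not the paper's memristive model):

```python
# Matignon's stability condition for linear fractional-order systems.
# The Jacobian below is an arbitrary stand-in, not the paper's model.
import numpy as np

def fractional_stable(A, alpha):
    eig = np.linalg.eigvals(A)
    return bool(np.all(np.abs(np.angle(eig)) > alpha * np.pi / 2))

A = np.array([[0.1, -1.0],
              [1.0,  0.1]])          # eigenvalues 0.1 +/- 1j
for alpha in (0.5, 0.9, 1.0):
    print(f"alpha={alpha}: stable={fractional_stable(A, alpha)}")
```

Note how the same Jacobian can be unstable at integer order alpha = 1 yet stable at smaller fractional orders, which is why stability here depends on alpha as well as on the network structure.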
arXiv Detail & Related papers (2021-09-29T12:33:23Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non-commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Geometry Perspective Of Estimating Learning Capability Of Neural Networks [0.0]
The paper considers a broad class of neural networks with generalized architectures performing simple least-squares regression via stochastic gradient descent (SGD).
The relationship between the generalization capability with the stability of the neural network has also been discussed.
By correlating the principles of high-energy physics with the learning theory of neural networks, the paper establishes a variant of the Complexity-Action conjecture from an artificial neural network perspective.
arXiv Detail & Related papers (2020-11-03T12:03:19Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges to reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
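A generic sketch of that mechanism (names and shapes below are illustrative assumptions, not the paper's code): view the network as a complete DAG, attach a learnable logit to every edge, and let each node aggregate a sigmoid-gated sum of all earlier nodes, so the connectivity pattern itself receives gradients.

```python
# Generic sketch of differentiable connectivity over a complete DAG
# (illustrative assumptions, not the paper's code).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dag_forward(x, node_ops, edge_logits):
    """node_ops[i]: per-node transform; edge_logits[j, i]: logit of edge j->i."""
    outputs = [x]
    for i, op in enumerate(node_ops, start=1):
        gates = sigmoid(edge_logits[:i, i])       # learnable edge strengths into node i
        agg = sum(g * h for g, h in zip(gates, outputs))
        outputs.append(op(agg))
    return outputs[-1]

rng = np.random.default_rng(0)
n_nodes, dim = 4, 8
node_ops = [lambda h, W=rng.normal(size=(dim, dim)) / np.sqrt(dim): np.tanh(W @ h)
            for _ in range(n_nodes)]
edge_logits = rng.normal(size=(n_nodes + 1, n_nodes + 1))
print(dag_forward(rng.normal(size=dim), node_ops, edge_logits))
```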
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Phase diagram for two-layer ReLU neural networks at infinite-width limit [6.380166265263755]
We draw the phase diagram for the two-layer ReLU neural network at the infinite-width limit.
We identify three regimes in the phase diagram, i.e., linear regime, critical regime and condensed regime.
In the linear regime, the NN training dynamics are approximately linear, similar to those of a random feature model, with exponential loss decay.
In the condensed regime, we demonstrate through experiments that active neurons are condensed at several discrete orientations.
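A small experiment in the spirit of this phase diagram (hedged: the parametrisation below is illustrative and not the paper's exact scaling) is to train a wide two-layer ReLU network from different initialisation scales and measure how far the hidden weights travel. Small relative movement is the hallmark of the linear (lazy) regime; large movement signals feature learning, as in the condensed regime.

```python
# Illustrative probe of lazy vs. non-lazy training (assumed scalings, not
# the paper's parametrisation): two-layer ReLU regression by full-batch
# gradient descent, reporting relative hidden-weight movement.
import numpy as np

def relative_weight_movement(init_scale, width=512, steps=300, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(32, 2))
    y = np.sin(2.0 * X[:, 0])                     # toy regression target
    W = rng.normal(size=(width, 2)) * init_scale  # hidden weights
    a = rng.normal(size=width) * init_scale       # output weights
    W0, m = W.copy(), len(y)
    for _ in range(steps):
        h = np.maximum(W @ X.T, 0.0)              # ReLU features, (width, m)
        err = a @ h / np.sqrt(width) - y          # residuals, (m,)
        grad_a = h @ err / (np.sqrt(width) * m)
        grad_W = (a[:, None] * (h > 0.0) * err) @ X / (np.sqrt(width) * m)
        a -= lr * grad_a
        W -= lr * grad_W
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

for scale in (1.0, 1e-3):
    print(f"init scale {scale:g}: relative movement "
          f"{relative_weight_movement(scale):.3f}")
```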
arXiv Detail & Related papers (2020-07-15T06:04:35Z)