Learning fixed points of recurrent neural networks by reparameterizing the network model
- URL: http://arxiv.org/abs/2307.06732v2
- Date: Thu, 27 Jul 2023 09:23:48 GMT
- Title: Learning fixed points of recurrent neural networks by reparameterizing the network model
- Authors: Vicky Zhu and Robert Rosenbaum
- Abstract summary: In computational neuroscience, fixed points of recurrent neural networks are commonly used to model neural responses to static or slowly changing stimuli.
A natural approach is to use gradient descent on the Euclidean space of synaptic weights.
We show that this approach can lead to poor learning performance due to singularities that arise in the loss surface.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In computational neuroscience, fixed points of recurrent neural networks are
commonly used to model neural responses to static or slowly changing stimuli.
These applications raise the question of how to train the weights in a
recurrent neural network to minimize a loss function evaluated on fixed points.
A natural approach is to use gradient descent on the Euclidean space of
synaptic weights. We show that this approach can lead to poor learning
performance due, in part, to singularities that arise in the loss surface. We
use a reparameterization of the recurrent network model to derive two
alternative learning rules that produce more robust learning dynamics. We show
that these learning rules can be interpreted as steepest descent and gradient
descent, respectively, under a non-Euclidean metric on the space of recurrent
weights. Our results question the common, implicit assumption that learning in
the brain should be expected to follow the negative Euclidean gradient of
synaptic weights.
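To make the setting concrete, here is a minimal numerical sketch (not the paper's code or its reparameterized rules) of the "natural approach" the abstract describes: iterate a rate model to its fixed point and take Euclidean gradient steps on the recurrent weights for a loss evaluated at that fixed point, with the gradient obtained by implicit differentiation of the fixed-point equation. The nonlinearity, loss, sizes, and the teacher/student setup below are illustrative assumptions.

```python
# Hedged sketch: Euclidean gradient descent on recurrent weights W for a loss
# evaluated at the fixed point r* = tanh(W r* + x). Not the paper's method;
# all names and sizes are illustrative.
import numpy as np

def fixed_point(W, x, n_iter=500):
    """Iterate r <- tanh(W r + x); assumes the iteration converges to a stable fixed point."""
    r = np.zeros(W.shape[0])
    for _ in range(n_iter):
        r = np.tanh(W @ r + x)
    return r

def loss_and_euclidean_grad(W, x, target):
    """Loss L = 0.5 ||r* - target||^2 at the fixed point, and dL/dW from
    implicit differentiation of r* = tanh(W r* + x):
        (I - D W) dr* = D (dW) r*,  with D = diag(1 - r*^2)."""
    r = fixed_point(W, x)
    D = np.diag(1.0 - r ** 2)
    # adjoint vector: D (I - D W)^{-T} (r* - target); then dL/dW = outer(adjoint, r*)
    adj = D @ np.linalg.solve((np.eye(len(r)) - D @ W).T, r - target)
    return 0.5 * np.sum((r - target) ** 2), np.outer(adj, r)

rng = np.random.default_rng(0)
n = 20
x = rng.standard_normal(n)
W_teacher = 0.1 * rng.standard_normal((n, n))   # weak recurrence so the iteration converges
target = fixed_point(W_teacher, x)              # illustrative target: a "teacher" fixed point

W = 0.1 * rng.standard_normal((n, n))           # student weights, same weak-coupling regime
for step in range(200):
    loss, grad = loss_and_euclidean_grad(W, x, target)
    W -= 0.1 * grad                             # plain Euclidean gradient step on W
print(f"final loss: {loss:.3e}")
```

The paper's point is that descending this Euclidean gradient can perform poorly, in part because of singularities in the loss surface, which motivates the reparameterized learning rules and the non-Euclidean interpretation described above.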
Related papers
- Smooth Exact Gradient Descent Learning in Spiking Neural Networks [0.0]
We demonstrate exact gradient descent learning based on spiking dynamics that change only continuously.
Our results show how non-disruptive learning is possible despite discrete spikes.
arXiv Detail & Related papers (2023-09-25T20:51:00Z)
- Decorrelating neurons using persistence [29.25969187808722]
We present two regularisation terms computed from the weights of a minimum spanning tree of a clique.
We demonstrate that naively minimising all correlations between neurons yields lower accuracy than using our regularisation terms.
We include a proof of differentiability of our regularisers, thus developing the first effective topological persistence-based regularisation terms.
arXiv Detail & Related papers (2023-08-09T11:09:14Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Optimal rates of approximation by shallow ReLU$^k$ neural networks and applications to nonparametric regression [12.21422686958087]
We study the approximation capacity of some variation spaces corresponding to shallow ReLU$^k$ neural networks.
For functions with less smoothness, the approximation rates in terms of the variation norm are established.
We show that shallow neural networks can achieve the minimax optimal rates for learning Hölder functions.
arXiv Detail & Related papers (2023-04-04T06:35:02Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework for neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)