Faster Biological Gradient Descent Learning
- URL: http://arxiv.org/abs/2009.12745v1
- Date: Sun, 27 Sep 2020 05:26:56 GMT
- Title: Faster Biological Gradient Descent Learning
- Authors: Ho Ling Li
- Abstract summary: Back-propagation is a popular machine learning algorithm that uses gradient descent in training neural networks for supervised learning.
We have come up with a simple and local gradient descent optimization algorithm that can reduce training time.
Our algorithm is found to speed up learning, particularly for small networks.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Back-propagation is a popular machine learning algorithm that uses gradient
descent in training neural networks for supervised learning, but can be very
slow. A number of algorithms have been developed to speed up convergence and
improve robustness of the learning. However, they are complicated to implement
biologically as they require information from previous updates. Inspired by
synaptic competition in biology, we have come up with a simple and local
gradient descent optimization algorithm that can reduce training time, with no
demand on past details. Our algorithm, named dynamic learning rate (DLR), works
similarly to the traditional gradient descent used in back-propagation, except
that instead of having a uniform learning rate across all synapses, the
learning rate depends on the current neuronal connection weights. Our algorithm
is found to speed up learning, particularly for small networks.
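To make the mechanism concrete, below is a minimal sketch of plain gradient descent with a weight-dependent learning rate. The abstract does not specify DLR's functional form, so scaling each synapse's rate by the magnitude of its current weight is an illustrative assumption, not the paper's actual rule; the point is that the update stays local, needing only the current weight and gradient and no record of past updates.

```python
import numpy as np

def dlr_step(w, grad, base_lr=0.1, eps=1e-3):
    """Gradient-descent step whose learning rate depends on the current weights.

    The |w| scaling is an illustrative assumption, not the paper's rule; the
    update is local, using only each synapse's own weight and gradient.
    """
    per_synapse_lr = base_lr * (np.abs(w) + eps)  # no history of past updates needed
    return w - per_synapse_lr * grad

# Toy usage: drive a weight vector toward a target under the quadratic loss
# 0.5 * ||w - target||^2, whose gradient is simply (w - target).
target = np.array([1.0, -2.0, 0.5])
w = np.zeros_like(target)
for _ in range(300):
    w = dlr_step(w, w - target)
print(w)  # ends up close to [1.0, -2.0, 0.5]
```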
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) are open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline [5.094970748243019]
Spiking neural networks (SNNs) are bio-plausible computing models with high energy efficiency.
Backpropagation Through Time (BPTT) is traditionally used to train SNNs.
We present Spatiotemporal Online Learning for Synaptic Adaptation (SOLSA), specifically designed for online learning of SNNs.
arXiv Detail & Related papers (2023-07-21T02:47:03Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on BP optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework each block can be trained independently, so the framework can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations [0.0]
Animals thrive in a constantly changing environment and leverage the temporal structure to learn causal representations.
We introduce a simple algorithm that uses optimization at inference time to generate internal representations of temporal context.
We show that a network trained on a series of tasks using traditional weight updates can infer tasks dynamically.
We then alternate between the weight updates and the latent updates to arrive at Thalamus, a task-agnostic algorithm capable of discovering disentangled representations in a stream of unlabeled tasks.
arXiv Detail & Related papers (2022-05-24T01:29:21Z)
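The Thalamus entry above alternates ordinary weight updates with gradient-based updates of a latent context at inference time. The toy sketch below shows that alternation on a small linear model with a squared-error loss; the model, learning rates, and update rules are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear model whose prediction depends on the input x and a latent context z:
# y_hat = W @ x + U @ z. Illustrative sketch only, not the paper's architecture.
W = rng.normal(scale=0.1, size=(1, 3))
U = rng.normal(scale=0.1, size=(1, 2))
z = np.zeros((2, 1))

def loss_and_grads(W, U, z, x, y):
    err = W @ x + U @ z - y                       # prediction error
    loss = 0.5 * float(err.T @ err)               # squared-error loss
    return loss, err @ x.T, err @ z.T, U.T @ err  # loss, dL/dW, dL/dU, dL/dz

x = rng.normal(size=(3, 1))
y = np.array([[1.0]])
lr = 0.05

for _ in range(300):
    # Latent update: infer the context z by gradient descent, weights held fixed.
    _, _, _, dz = loss_and_grads(W, U, z, x, y)
    z -= lr * dz
    # Weight update: adapt W and U with the inferred context held fixed.
    loss, dW, dU, _ = loss_and_grads(W, U, z, x, y)
    W -= lr * dW
    U -= lr * dU

print(loss)  # shrinks toward zero as weights and latent context co-adapt
```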
- Brain-Inspired Learning on Neuromorphic Substrates [5.279475826661643]
This article provides a mathematical framework for the design of practical online learning algorithms for neuromorphic substrates.
Specifically, we show a direct connection between Real-Time Recurrent Learning (RTRL) and biologically plausible learning rules for training Spiking Neural Networks (SNNs).
We motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity.
arXiv Detail & Related papers (2020-10-22T17:56:59Z)
- MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents [0.0]
An alternative way of training an artificial neural network is to treat each unit in the network as a reinforcement learning agent.
We propose a novel algorithm called MAP propagation that significantly reduces the high variance associated with this approach.
Our work thus allows for the broader application of the teams of agents in deep reinforcement learning.
arXiv Detail & Related papers (2020-10-15T17:17:39Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Gradient-only line searches to automatically determine learning rates for a variety of stochastic training algorithms [0.0]
We study the application of the Gradient-Only Line Search that is Inexact (GOLS-I) to determine the learning rate schedule for a selection of popular neural network training algorithms.
GOLS-I's learning rate schedules are competitive with manually tuned learning rates across seven optimization algorithms, three types of neural network architecture, 23 datasets and two loss functions.
arXiv Detail & Related papers (2020-06-29T08:59:31Z)
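The entry above determines learning rates with a gradient-only line search. A minimal sketch of the underlying idea follows: grow the step along the descent direction until the directional derivative changes sign, using gradient evaluations only. The initial step, growth factor, and stopping rule are illustrative assumptions, not the actual GOLS-I procedure, which also handles stochastic (mini-batch) gradients.

```python
import numpy as np

def gradient_only_line_search(grad_fn, x, d, alpha0=1e-2, grow=2.0, max_iter=30):
    """Pick a step size along descent direction d using gradients only.

    The step is grown until the directional derivative g(x + alpha*d) . d
    turns non-negative, i.e. until the sign change is bracketed; function
    values are never evaluated.
    """
    lo, alpha = 0.0, alpha0
    for _ in range(max_iter):
        if grad_fn(x + alpha * d) @ d >= 0.0:  # sign change passed: bracket found
            return 0.5 * (lo + alpha)          # return the midpoint of the bracket
        lo, alpha = alpha, grow * alpha        # still descending: grow the step
    return alpha

# Toy usage on the quadratic loss 0.5 * ||x||^2, whose gradient is x itself.
grad_fn = lambda x: x
x = np.array([3.0, -1.0])
d = -grad_fn(x)                                # steepest-descent direction
step = gradient_only_line_search(grad_fn, x, d)
print(step, x + step * d)                      # step near 1, new point near the origin
```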
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.