Convergence of a New Learning Algorithm
- URL: http://arxiv.org/abs/2202.12829v1
- Date: Tue, 8 Feb 2022 19:36:43 GMT
- Title: Convergence of a New Learning Algorithm
- Authors: Feng Lin
- Abstract summary: A new learning algorithm has been shown to be mathematically equivalent to the conventional back-propagation learning algorithm.
A convergence measure is proposed to measure the convergence rate of the new algorithm.
- Score: 3.679678618747414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A new learning algorithm proposed by Brandt and Lin for neural network [1],
[2] has been shown to be mathematically equivalent to the conventional
back-propagation learning algorithm, but has several advantages over the
backpropagation algorithm, including feedback-network-free implementation and
biological plausibility. In this paper, we investigate the convergence of the
new algorithm. A necessary and sufficient condition for the algorithm to
converge is derived. A convergence measure is proposed to measure the
convergence rate of the new algorithm. Simulation studies are conducted to
investigate the convergence of the algorithm with respect to the number of
neurons, the connection distance, the connection density, the ratio of
excitatory/inhibitory synapses, the membrane potentials, and the synapse
strengths.
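To make the equivalence claim concrete, the following is a minimal reference sketch of one conventional back-propagation update for a two-layer sigmoid network. It illustrates the baseline rule the paper compares against, not the Brandt-Lin feedback-network-free formulation itself; the dimensions, squared-error loss, and learning rate are illustrative assumptions.

```python
import numpy as np

# Minimal reference sketch: one step of conventional back-propagation for a
# two-layer network with sigmoid hidden units and squared-error loss.
# This is the baseline rule the Brandt-Lin algorithm is stated to be
# equivalent to; it is NOT the feedback-network-free formulation itself.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dimensions and data (illustrative only).
n_in, n_hid, n_out, lr = 4, 8, 2, 0.1
x = rng.normal(size=(n_in,))
t = rng.normal(size=(n_out,))               # target
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))

# Forward pass.
h = sigmoid(W1 @ x)
y = W2 @ h

# Backward pass: error signals propagated through the feedback path.
delta_out = y - t                           # dL/dy for squared error
delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

# Weight updates.
W2 -= lr * np.outer(delta_out, h)
W1 -= lr * np.outer(delta_hid, x)
```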
Related papers
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
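As a rough illustration of what algorithm unrolling means (a generic sketch, not the SURF architecture; the least-squares objective, unroll depth, and step sizes are assumptions), a fixed number of gradient-descent iterations can be written out as an explicit computation whose per-step parameters could be learned:

```python
import numpy as np

# Generic illustration of algorithm unrolling (not the SURF architecture):
# K gradient-descent iterations on a least-squares objective are unrolled into
# a fixed computation whose per-step step sizes could be treated as learnable
# parameters. All names and sizes here are illustrative assumptions.

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=(20,))

K = 10
step_sizes = np.full(K, 0.01)   # in a learned unrolling these would be trained

w = np.zeros(5)
for k in range(K):              # the "unrolled" iterations
    grad = A.T @ (A @ w - b)
    w = w - step_sizes[k] * grad

print("residual norm after unrolling:", np.linalg.norm(A @ w - b))
```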
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Contraction-Guided Adaptive Partitioning for Reachability Analysis of Neural Network Controlled Systems [5.359060261460183]
We present a contraction-guided adaptive partitioning algorithm for improving interval-valued reachable set estimates in a nonlinear feedback loop.
By leveraging a decoupling of the neural network verification step and the reachability partitioning layers, the algorithm can provide accuracy improvements at little computational cost.
We report a sizable improvement in the accuracy of reachable set estimation in a fraction of the runtime as compared to state-of-the-art methods.
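A minimal sketch of the underlying idea, assuming a tiny two-layer ReLU network and a box input set: interval arithmetic gives an over-approximate reachable set, and splitting the input box (here a plain uniform split, not the paper's contraction-guided criterion) can only tighten the bounds.

```python
import numpy as np

# Generic illustration of interval reachable-set over-approximation for a tiny
# ReLU network, and of how splitting the input box tightens the bounds. This
# uses a plain uniform split, not the contraction-guided criterion from the
# paper; the network weights and the input box are illustrative assumptions.

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(6, 2)), rng.normal(size=(6,))
W2, b2 = rng.normal(size=(1, 6)), rng.normal(size=(1,))

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b with interval arithmetic."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def reach(lo, hi):
    """Over-approximate the output range of the two-layer ReLU network."""
    l1, u1 = interval_affine(lo, hi, W1, b1)
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)  # ReLU is monotone
    return interval_affine(l1, u1, W2, b2)

# Coarse estimate on the whole box [-1, 1]^2.
coarse = reach(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))

# Partition the first coordinate into two halves and take the union.
left = reach(np.array([-1.0, -1.0]), np.array([0.0, 1.0]))
right = reach(np.array([0.0, -1.0]), np.array([1.0, 1.0]))
union = (np.minimum(left[0], right[0]), np.maximum(left[1], right[1]))

print("coarse bounds     :", coarse)
print("partitioned bounds:", union)  # never looser than the coarse bounds
```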
arXiv Detail & Related papers (2023-04-07T14:43:21Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on BP optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, so it does not require the generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
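A minimal sketch of block-wise training with a local predictor per block, in the spirit of (but not reproducing) the CaFo setup: every block is paired with its own head fitted only to the features it produces, so no error signal crosses block boundaries. The fixed random ReLU blocks, ridge-regression heads, and toy data are illustrative assumptions.

```python
import numpy as np

# Generic sketch of block-wise training with a local predictor per block:
# each block's local head is fitted to the labels using only that block's
# features; the (random, frozen) feature blocks stand in for blocks trained
# locally, and no error signal is back-propagated across blocks.
# Shapes, losses and the least-squares "heads" are illustrative assumptions.

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))                 # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary labels
Y = np.stack([1 - y, y], axis=1)               # one-hot label distribution

def random_block(d_in, d_out, rng):
    W = rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_in, d_out))
    return lambda H: np.maximum(H @ W, 0.0)    # fixed (frozen) ReLU features

blocks = [random_block(10, 16, rng), random_block(16, 16, rng)]

H = X
for i, block in enumerate(blocks):
    H = block(H)                               # forward through this block only
    # Local head: ridge-regression fit of the label distribution from H alone.
    head = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ Y)
    acc = np.mean((H @ head).argmax(1) == Y.argmax(1))
    print(f"block {i}: local-head training accuracy = {acc:.2f}")
```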
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting, so they are incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
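A minimal sketch of the gradient-tracking communication mechanism on a single-level toy problem (the bilevel structure and the stochastic gradient estimators of the paper are not reproduced; the ring topology, step size, and local quadratic objectives are assumptions):

```python
import numpy as np

# Generic sketch of gradient tracking on a simple single-level problem.
# Each of n agents holds a local objective f_i(x) = 0.5 * (x - a_i)^2;
# gradient tracking lets all agents converge to the minimizer of the
# average objective. Topology and step size are illustrative assumptions.

n, T, alpha = 4, 200, 0.1
a = np.array([1.0, 2.0, 3.0, 4.0])            # local optima; average optimum = 2.5

# Doubly stochastic mixing matrix for a ring of 4 agents.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

grad = lambda x: x - a                        # per-agent local gradients
x = np.zeros(n)                               # local iterates
y = grad(x)                                   # gradient trackers

for _ in range(T):
    x_new = W @ x - alpha * y                 # consensus step + tracked gradient
    y = W @ y + grad(x_new) - grad(x)         # track the average gradient
    x = x_new

print("agent iterates:", np.round(x, 3), " (average optimum = 2.5)")
```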
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- FedGiA: An Efficient Hybrid Algorithm for Federated Learning [39.777714678378736]
Federated learning has advanced rapidly in recent years but still faces many challenges.
We propose a hybrid federated learning algorithm (FedGiA) that combines gradient descent and the inexact alternating direction method of multipliers.
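A rough sketch of the general idea of mixing gradient descent with inexact ADMM in a federated setting (a generic consensus-ADMM variant, not the FedGiA algorithm; the penalty, step size, local-solve schedule, and least-squares objectives are assumptions): each client runs a few gradient steps on its augmented local objective, the server averages, and scaled dual variables are updated.

```python
import numpy as np

# Rough sketch of a hybrid "gradient steps inside inexact consensus ADMM"
# scheme for federated least squares. This is NOT the FedGiA algorithm itself:
# the penalty rho, step size, local-solve schedule, and the least-squares
# objectives are illustrative assumptions.

rng = np.random.default_rng(4)
n_clients, d = 3, 5
rho, lr, rounds, local_steps = 1.0, 0.01, 50, 5

# Local data: client i holds (A_i, b_i) for 0.5*||A_i x - b_i||^2.
A = [rng.normal(size=(30, d)) for _ in range(n_clients)]
b = [rng.normal(size=(30,)) for _ in range(n_clients)]

z = np.zeros(d)                               # global (server) variable
x = [np.zeros(d) for _ in range(n_clients)]   # local variables
u = [np.zeros(d) for _ in range(n_clients)]   # scaled dual variables

for _ in range(rounds):
    for i in range(n_clients):
        # Inexact local solve of the augmented objective
        #   0.5*||A_i x - b_i||^2 + (rho/2)*||x - z + u_i||^2
        for _ in range(local_steps):
            g = A[i].T @ (A[i] @ x[i] - b[i]) + rho * (x[i] - z + u[i])
            x[i] = x[i] - lr * g
    z = np.mean([x[i] + u[i] for i in range(n_clients)], axis=0)  # server average
    for i in range(n_clients):
        u[i] = u[i] + (x[i] - z)                                  # dual update

print("max consensus residual:",
      max(np.linalg.norm(x[i] - z) for i in range(n_clients)))
```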
arXiv Detail & Related papers (2022-05-03T11:56:33Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We derive lower complexity bounds for this setting and design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Fast Reinforcement Learning with Incremental Gaussian Mixture Models [0.0]
An online and incremental algorithm capable of learning from a single pass through data, called Incremental Gaussian Mixture Network (IGMN), was employed as a sample-efficient function approximator for the joint state and Q-values space.
Results are analyzed to explain the properties of the obtained algorithm, and it is observed that the IGMN function approximator brings important advantages to reinforcement learning compared with conventional neural networks trained by gradient descent.
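A simplified single-pass sketch in the spirit of an incremental Gaussian mixture (not the published IGMN update rules, and without the Q-value regression head): a sample either updates the nearest component online or, if it is too far from every component, spawns a new one. The novelty threshold and initial variance are assumptions.

```python
import numpy as np

# Simplified sketch of an incremental (single-pass) Gaussian mixture update:
# a new component is created when a sample is far from all existing
# components, otherwise the nearest component's mean and diagonal variance
# are updated online (Welford-style). Threshold and sizes are assumptions.

class IncrementalGaussianMixture:
    def __init__(self, dim, novelty=3.0, init_var=1.0):
        self.dim, self.novelty, self.init_var = dim, novelty, init_var
        self.means, self.vars, self.counts = [], [], []

    def update(self, x):
        if self.means:
            # Squared Mahalanobis distance under diagonal covariances.
            d2 = [np.sum((x - m) ** 2 / v) for m, v in zip(self.means, self.vars)]
            j = int(np.argmin(d2))
            if d2[j] <= self.novelty ** 2:
                # Online update of the closest component.
                self.counts[j] += 1
                delta = x - self.means[j]
                self.means[j] += delta / self.counts[j]
                delta2 = x - self.means[j]
                self.vars[j] += (delta * delta2 - self.vars[j]) / self.counts[j]
                return
        # Otherwise create a new component centered at the sample.
        self.means.append(x.astype(float).copy())
        self.vars.append(np.full(self.dim, self.init_var))
        self.counts.append(1)

rng = np.random.default_rng(5)
gmm = IncrementalGaussianMixture(dim=2)
for x in np.concatenate([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))]):
    gmm.update(x)
print("components:", len(gmm.means), "means:", np.round(gmm.means, 2))
```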
arXiv Detail & Related papers (2020-11-02T03:18:15Z)
- SONIA: A Symmetric Blockwise Truncated Optimization Algorithm [2.9923891863939938]
This work presents a new algorithm for empirical risk minimization.
The algorithm bridges the gap between first- and second-order methods by computing a second-order search-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement.
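A toy sketch of the idea described above, assuming a strongly convex quadratic and a randomly chosen k-dimensional subspace (not SONIA's actual subspace construction or truncation rules): a Newton-type step is taken inside the subspace and a scaled steepest-descent step in its orthogonal complement.

```python
import numpy as np

# Toy sketch (not the SONIA algorithm itself): on a strongly convex quadratic
# f(x) = 0.5*x'Ax - b'x, alternate (1) a Newton-type step restricted to a
# k-dimensional subspace S with (2) a scaled steepest-descent step in the
# orthogonal complement of S. The random choice of S, the step scaling, and
# the problem sizes are illustrative assumptions.

rng = np.random.default_rng(6)
d, k = 20, 5
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)                 # symmetric positive-definite Hessian
b = rng.normal(size=(d,))
grad = lambda x: A @ x - b
L = np.linalg.norm(A, 2)                # Lipschitz constant of the gradient

x = np.zeros(d)
for _ in range(200):
    # (1) Second-order step inside a random k-dimensional subspace S = range(Q).
    Q, _ = np.linalg.qr(rng.normal(size=(d, k)))
    s = np.linalg.solve(Q.T @ A @ Q, -(Q.T @ grad(x)))
    x = x + Q @ s
    # (2) Scaled steepest-descent step in the orthogonal complement of S.
    g = grad(x)
    x = x - (1.0 / L) * (g - Q @ (Q.T @ g))

print("distance to optimum:", np.linalg.norm(x - np.linalg.solve(A, b)))
```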
arXiv Detail & Related papers (2020-06-06T19:28:14Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that partitions the verification problem in an iterative manner and explores two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
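A generic sketch of two of the ingredients mentioned above (not the paper's exact procedures; the single-layer weights, the input boxes, and the splitting rule are assumptions): the input region is split into sub-boxes that could be verified in parallel, and a pre-processing pass uses interval bounds on pre-activations to detect neurons whose activation phase is fixed on a given sub-box.

```python
import numpy as np

# Generic sketch: (1) split the input region into sub-boxes that could be
# verified in parallel, and (2) use interval bounds on pre-activations to
# detect neurons whose activation phase is fixed (stably active or stably
# inactive) on each sub-box, which lets those ReLUs be replaced by linear or
# zero maps. Weights and boxes are illustrative assumptions.

rng = np.random.default_rng(7)
W, b = rng.normal(size=(8, 2)), rng.normal(size=(8,))

def preactivation_bounds(lo, hi):
    """Interval bounds on W x + b over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def split_box(lo, hi):
    """Split a box into two halves along its widest dimension."""
    k = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[k] + hi[k])
    lo2, hi1 = lo.copy(), hi.copy()
    hi1[k], lo2[k] = mid, mid
    return (lo, hi1), (lo2, hi)

boxes = [(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))]
boxes = [piece for box in boxes for piece in split_box(*box)]  # one split level

for i, (lo, hi) in enumerate(boxes):  # each sub-box could go to its own worker
    l, u = preactivation_bounds(lo, hi)
    stable_on = int(np.sum(l >= 0.0))   # ReLU provably in its linear phase
    stable_off = int(np.sum(u <= 0.0))  # ReLU provably outputting zero
    print(f"sub-box {i}: {stable_on} stably active, "
          f"{stable_off} stably inactive of {len(b)} neurons")
```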
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.