A More Biologically Plausible Local Learning Rule for ANNs
- URL: http://arxiv.org/abs/2011.12012v1
- Date: Tue, 24 Nov 2020 10:35:47 GMT
- Title: A More Biologically Plausible Local Learning Rule for ANNs
- Authors: Shashi Kant Gupta
- Abstract summary: The proposed learning rule is derived from the concepts of spike timing dependent plasticity and neuronal association.
A preliminary evaluation on binary classification of the MNIST and IRIS datasets shows performance comparable to backpropagation.
The local nature of learning opens the possibility of large-scale distributed and parallel learning in the network.
- Score: 6.85316573653194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The backpropagation algorithm is often debated for its biological
plausibility, and various learning methods for neural architectures have been
proposed in search of more biologically plausible learning. Most of them try
to solve the "weight transport problem" and propagate errors backward through
the architecture via some alternative mechanism. In this work, we investigated
a slightly different approach that uses only local information capturing spike
timing, with no propagation of errors. The proposed learning rule is derived
from the concepts of spike timing dependent plasticity and neuronal
association. A preliminary evaluation on binary classification of the MNIST
and IRIS datasets with two hidden layers shows performance comparable to
backpropagation. The model learned with this method also shows potentially
better adversarial robustness against the FGSM attack than a model trained by
backpropagation of the cross-entropy loss. The local nature of learning opens
the possibility of large-scale distributed and parallel learning in the
network. Finally, the proposed method is more biologically sound and may help
in understanding how biological neurons learn different abstractions.
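The abstract does not spell out the exact update equations, so the following is only a rough sketch of what a purely local, STDP-inspired rule for a network with two hidden layers might look like; the Hebbian form, the layer sizes, and all names are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (illustrative, not the paper's exact rule): every layer adjusts
# its weights from its own pre- and post-synaptic activity only; no error signal
# is propagated backward through the network.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    """Return the activation of every layer for one input vector x."""
    acts = [x]
    for W in weights:
        acts.append(np.tanh(W @ acts[-1]))
    return acts

def local_update(acts, target, weights, lr=0.01):
    """Layer-wise Hebbian-style updates using only locally available quantities."""
    new_weights = []
    for l, W in enumerate(weights):
        pre, post = acts[l], acts[l + 1]
        if l == len(weights) - 1:
            post_term = target - post   # output layer: local delta rule
        else:
            post_term = post            # hidden layers: "pre drives post" potentiation
        new_weights.append(W + lr * np.outer(post_term, pre))
    return new_weights

# Tiny usage example: 4 inputs -> two hidden layers of 8 units -> 1 output.
weights = [rng.normal(scale=0.1, size=s) for s in [(8, 4), (8, 8), (1, 8)]]
x, y = rng.normal(size=4), np.array([1.0])
for _ in range(100):
    weights = local_update(forward(x, weights), y, weights)
```

Because each update touches only one layer's weights and activations, the layers could in principle be updated in parallel, which is the distributed-learning possibility the abstract alludes to.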
Related papers
- Seeing Unseen: Discover Novel Biomedical Concepts via
Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z) - Finding Interpretable Class-Specific Patterns through Efficient Neural
Search [43.454121220860564]
We propose a novel, inherently interpretable binary neural network architecture DiffNaps that extracts differential patterns from data.
DiffNaps is scalable to hundreds of thousands of features and robust to noise.
We show on synthetic and real world data, including three biological applications, that, unlike its competitors, DiffNaps consistently yields accurate, succinct, and interpretable class descriptions.
arXiv Detail & Related papers (2023-12-07T14:09:18Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Learning efficient backprojections across cortical hierarchies in real
time [1.6474865533365743]
We introduce a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies.
All weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses.
Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment.
arXiv Detail & Related papers (2022-12-20T13:54:04Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas, but it can be very difficult to debug when the model does not work as expected.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Towards Scaling Difference Target Propagation by Learning Backprop
Targets [64.90165892557776]
Difference Target Propagation (DTP) is a biologically plausible learning algorithm closely related to Gauss-Newton (GN) optimization.
We propose a novel feedback weight training scheme that ensures both that DTP approximates BP and that layer-wise feedback weight training can be restored.
We report the best performance ever achieved by DTP on CIFAR-10 and ImageNet.
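For reference, the core of DTP is how layer-wise targets are formed from feedback mappings. Below is a minimal sketch of that target computation, assuming plain linear feedback maps Q[l]; this is an illustrative simplification, not the feedback-weight training scheme proposed in the paper.

```python
# Sketch of the difference target propagation (DTP) target computation.
# Q[l] is an assumed linear feedback map from layer l+1 activity back to layer l;
# the paper's contribution is how such feedback mappings are trained (not shown).
import numpy as np

def dtp_targets(activations, top_target, Q):
    """activations[l] is h_l; returns a local target for every hidden layer."""
    targets = [None] * len(activations)
    targets[-1] = top_target                      # e.g. h_L - lr * dL/dh_L
    for l in range(len(activations) - 2, 0, -1):  # skip the input layer
        h_l, h_next = activations[l], activations[l + 1]
        # Difference correction: g(target_above) + h_l - g(h_above)
        targets[l] = Q[l] @ targets[l + 1] + h_l - Q[l] @ h_next
    return targets
```

Each layer can then be trained on a purely local loss such as ||h_l - target_l||^2.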
arXiv Detail & Related papers (2022-01-31T18:20:43Z) - BioLeaF: A Bio-plausible Learning Framework for Training of Spiking
Neural Networks [4.698975219970009]
We propose a new bio-plausible learning framework consisting of two components: a new architecture, and its supporting learning rules.
Under our microcircuit architecture, we employ the Spike-Timing-Dependent-Plasticity (STDP) rule operating in local compartments to update synaptic weights.
Our experiments show that the proposed framework demonstrates learning accuracy comparable to BP-based rules.
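The STDP rule mentioned here follows, in its simplest pairwise form, an exponential timing window. A sketch of that textbook kernel is given below; the constants are illustrative and not taken from the BioLeaF paper.

```python
# Textbook pairwise STDP: potentiate when the pre-synaptic spike precedes the
# post-synaptic spike, depress otherwise, with exponentially decaying magnitude.
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau)
    if dt < 0:    # post before pre -> depression
        return -a_minus * np.exp(dt / tau)
    return 0.0

print(stdp_dw(10.0, 15.0))   # pre at 10 ms, post at 15 ms -> positive change
print(stdp_dw(15.0, 10.0))   # reversed order -> negative change
```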
arXiv Detail & Related papers (2021-11-14T10:32:22Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Predictive Coding Can Do Exact Backpropagation on Any Neural Network [40.51949948934705]
We generalize (IL and) Z-IL by directly defining them on computational graphs.
This is the first biologically plausible algorithm shown to be equivalent to BP in how it updates parameters on any neural network.
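As background, inference learning (IL) in predictive coding networks first relaxes neuron activities to minimize layer-wise prediction errors and then applies purely local weight updates. The sketch below is the standard discriminative predictive-coding scheme with one hidden layer, not the Z-IL generalization described in the paper; all sizes and constants are illustrative.

```python
# Predictive-coding / inference-learning sketch: relax the hidden activity x
# against local prediction errors, then update each weight matrix from the
# error and activity available at that layer only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(16, 8))    # input -> hidden
W2 = rng.normal(scale=0.1, size=(4, 16))    # hidden -> output
inp, label = rng.normal(size=8), rng.normal(size=4)

x = np.tanh(W1 @ inp)                        # start at the forward-pass value
for _ in range(20):                          # inference phase
    e1 = x - np.tanh(W1 @ inp)               # hidden-layer prediction error
    e2 = label - np.tanh(W2 @ x)             # output error (output clamped to label)
    x += 0.1 * (-e1 + W2.T @ (e2 * (1 - np.tanh(W2 @ x) ** 2)))

# Learning phase: local updates from the converged errors.
e1 = x - np.tanh(W1 @ inp)
e2 = label - np.tanh(W2 @ x)
W1 += 0.01 * np.outer(e1 * (1 - np.tanh(W1 @ inp) ** 2), inp)
W2 += 0.01 * np.outer(e2 * (1 - np.tanh(W2 @ x) ** 2), x)
```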
arXiv Detail & Related papers (2021-03-08T11:52:51Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
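As a concrete point of reference, max-product belief propagation on a 1-D chain (for example, one scanline of a stereo problem) can be written in a few lines. The sketch below is the generic max-sum (log-domain) algorithm with hand-set potentials, not the trained, differentiable BP-Layer of the paper.

```python
# Generic max-product (max-sum in the log domain) belief propagation along a
# 1-D chain of n variables with k labels; the potentials here are illustrative.
import numpy as np

def max_sum_chain(unary, pairwise):
    """unary: (n, k) log-potentials; pairwise: (k, k); returns one label per node."""
    n, k = unary.shape
    msg = np.zeros((n, k))               # msg[i] = forward message arriving at node i
    for i in range(1, n):
        # For each label of node i, keep the best-scoring predecessor label.
        msg[i] = np.max(unary[i - 1] + msg[i - 1] + pairwise.T, axis=1)
    labels = np.zeros(n, dtype=int)
    labels[-1] = int(np.argmax(unary[-1] + msg[-1]))
    for i in range(n - 2, -1, -1):       # standard backward pass to read out labels
        labels[i] = int(np.argmax(unary[i] + msg[i] + pairwise[:, labels[i + 1]]))
    return labels

# Example: 5 nodes, 3 labels, smoothness prior that discourages label changes.
unary = np.log(np.random.default_rng(0).dirichlet(np.ones(3), size=5))
pairwise = np.where(np.eye(3, dtype=bool), 0.0, -1.0)
print(max_sum_chain(unary, pairwise))
```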
arXiv Detail & Related papers (2020-03-13T13:11:35Z) - Biologically-Motivated Deep Learning Method using Hierarchical
Competitive Learning [0.0]
I propose introducing unsupervised competitive learning, which requires only forward-propagating signals, as a pre-training method for CNNs.
The proposed method could be useful for a variety of poorly labeled data, for example, time series or medical data.
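In its simplest form, the competitive learning referred to here is a winner-take-all Hebbian update driven only by forward signals. The sketch below shows that textbook variant on flattened patches, not the paper's hierarchical procedure; all sizes are illustrative.

```python
# Winner-take-all competitive learning: each input is assigned to the unit whose
# weight vector matches it best, and only that unit's weights move toward the
# input. No error signal is propagated anywhere.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 16, 49                      # e.g. 16 filters over flattened 7x7 patches
W = rng.normal(scale=0.1, size=(n_units, dim))

def competitive_step(W, x, lr=0.05):
    winner = int(np.argmax(W @ x))         # forward pass picks the winning unit
    W[winner] += lr * (x - W[winner])      # only the winner is updated
    return W

for _ in range(1000):
    patch = rng.normal(size=dim)           # stand-in for an image patch
    W = competitive_step(W, patch)
```

The learned weight vectors could then serve as an initialization for convolutional filters before supervised training.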
arXiv Detail & Related papers (2020-01-04T20:07:36Z)