PyTorch-Hebbian: facilitating local learning in a deep learning
framework
- URL: http://arxiv.org/abs/2102.00428v1
- Date: Sun, 31 Jan 2021 10:53:08 GMT
- Title: PyTorch-Hebbian: facilitating local learning in a deep learning
framework
- Authors: Jules Talloen, Joni Dambre, Alexander Vandesompele
- Abstract summary: Hebbian local learning has shown potential as an alternative training mechanism to backpropagation.
We propose a framework for thorough and systematic evaluation of local learning rules in existing deep learning pipelines.
The framework is used to expand the Krotov-Hopfield learning rule to standard convolutional neural networks without sacrificing accuracy.
- Score: 67.67299394613426
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, unsupervised local learning, based on Hebb's idea that change in
synaptic efficacy depends on the activity of the pre- and postsynaptic neuron
only, has shown potential as an alternative training mechanism to
backpropagation. Unfortunately, Hebbian learning remains experimental and
rarely makes its way into standard deep learning frameworks. In this work, we
investigate the potential of Hebbian learning in the context of standard deep
learning workflows. To this end, a framework for thorough and systematic
evaluation of local learning rules in existing deep learning pipelines is
proposed. Using this framework, the potential of Hebbian learned feature
extractors for image classification is illustrated. In particular, the
framework is used to expand the Krotov-Hopfield learning rule to standard
convolutional neural networks without sacrificing accuracy compared to
end-to-end backpropagation. The source code is available at
https://github.com/Joxis/pytorch-hebbian.
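Hebb's idea as stated in the abstract can be sketched as a plain local weight update (a minimal illustration only, not the Krotov-Hopfield rule used in the paper; the function name, learning rate, and row renormalization are assumptions added here for stability):

```python
import numpy as np

def hebbian_update(w, x, eta=0.01):
    """One Hebbian step: the weight change depends only on the
    pre-synaptic input x and the post-synaptic output y = w @ x."""
    y = w @ x                      # post-synaptic activations
    dw = eta * np.outer(y, x)      # local rule: dw_ij = eta * y_i * x_j
    w = w + dw
    # renormalize rows to keep weights bounded (a common stabilization,
    # not part of the plain Hebbian rule itself)
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
x = rng.normal(size=8)
w = hebbian_update(w, x)
```

Note that the update uses no global error signal: each weight sees only its own pre- and postsynaptic activity, which is what makes the rule "local".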
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Activation Learning by Local Competitions [4.441866681085516]
We develop a biology-inspired learning rule that discovers features by local competitions among neurons.
It is demonstrated that the unsupervised features learned by this local learning rule can serve as a pre-training model.
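The local-competition idea can be sketched as a winner-take-all step, where only the best-matching neuron updates its weights (a minimal illustration, not this paper's actual rule; the function name and learning rate are hypothetical):

```python
import numpy as np

def wta_update(w, x, eta=0.05):
    """Winner-take-all step: neurons compete locally, and only the one
    whose weight vector best matches the input moves toward it."""
    k = int(np.argmax(w @ x))       # local competition picks a winner
    w[k] += eta * (x - w[k])        # Hebbian-like move toward the input
    return w

rng = np.random.default_rng(1)
w = rng.normal(size=(3, 5))         # 3 competing neurons, 5 inputs
x = rng.normal(size=5)
before = w.copy()
w = wta_update(w, x)
```

Repeating this over many inputs drives each neuron's weight vector toward a cluster of inputs, which is why such features can serve as an unsupervised pre-training model.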
arXiv Detail & Related papers (2022-09-26T10:43:29Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Hebbian Continual Representation Learning [9.54473759331265]
Continual Learning aims to bring machine learning into a more realistic scenario.
We investigate whether biologically inspired Hebbian learning is useful for tackling continual challenges.
arXiv Detail & Related papers (2022-06-28T09:21:03Z)
- Hebbian learning with gradients: Hebbian convolutional neural networks with modern deep learning frameworks [2.7666483899332643]
We show that Hebbian learning in hierarchical, convolutional neural networks can be implemented almost trivially with modern deep learning frameworks.
We build Hebbian convolutional multi-layer networks for object recognition.
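One common way to get Hebbian updates out of a gradient-based framework is a surrogate-loss trick (a sketch of the general technique under stated assumptions, not this paper's exact code): choose a loss whose gradient equals the negative of the desired Hebbian update, then let the framework's autograd and optimizers do the rest.

```python
import torch

# Surrogate loss L = -sum(y_detached * (W @ x)). Its gradient with
# respect to W is -y x^T, so a gradient-descent step on L performs the
# Hebbian update dW = eta * y x^T using ordinary autograd machinery.
torch.manual_seed(0)
W = torch.randn(4, 8, requires_grad=True)
x = torch.randn(8)
y = (W @ x).detach()            # freeze post-synaptic activity
loss = -(y * (W @ x)).sum()
loss.backward()                 # W.grad is now -outer(y, x)
```

Because the update is expressed as a gradient, it composes with standard optimizers, GPU execution, and convolutional layers, which is the sense in which Hebbian learning becomes "almost trivial" to implement.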
arXiv Detail & Related papers (2021-07-04T20:50:49Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- LoCo: Local Contrastive Representation Learning [93.98029899866866]
We show that by overlapping local blocks stacked on top of each other, we effectively increase the decoder depth and allow upper blocks to implicitly send feedback to lower blocks.
This simple design closes the performance gap between local learning and end-to-end contrastive learning algorithms for the first time.
arXiv Detail & Related papers (2020-08-04T05:41:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.