Activation Learning by Local Competitions
- URL: http://arxiv.org/abs/2209.13400v1
- Date: Mon, 26 Sep 2022 10:43:29 GMT
- Title: Activation Learning by Local Competitions
- Authors: Hongchao Zhou
- Abstract summary: We develop a biology-inspired learning rule that discovers features by local competitions among neurons.
It is demonstrated that the unsupervised features learned by this local learning rule can serve as a pre-training model.
- Score: 4.441866681085516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backpropagation, which drives the success of deep learning, most
likely differs from the learning mechanism of the brain. In this paper, we
develop a biology-inspired learning rule that discovers features through local
competitions among neurons, following Hebb's famous proposal. We demonstrate
that the unsupervised features learned by this local rule can serve as a
pre-training model that improves performance on some supervised learning
tasks. More importantly, the local rule enables us to build a new learning
paradigm, named activation learning, that differs fundamentally from
backpropagation: the output activation of the neural network roughly measures
how probable the input patterns are. Activation learning can acquire plentiful
local features from only a few input patterns and performs significantly
better than backpropagation when the number of training samples is relatively
small. This paradigm unifies unsupervised learning, supervised learning, and
generative models; it is also more robust against adversarial attacks, paving
a road toward general-task neural networks.
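To make the core idea concrete, here is a minimal NumPy sketch of feature learning by local competition: every unit competes for each input, the winner takes a Hebbian step toward it, and its weights are renormalized. This is an illustration of the general idea in the abstract, not the paper's actual rule; the class name, the hard winner-take-all choice, the learning rate, and the normalization step are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class CompetitiveHebbianLayer:
    """Toy feature layer trained by local competition (winner-take-all).

    A minimal sketch of the idea in the abstract, not the paper's actual
    rule: the hard winner-take-all competition, learning rate, and weight
    renormalization are illustrative assumptions.
    """

    def __init__(self, n_in, n_units, lr=0.05):
        self.W = rng.normal(size=(n_units, n_in))
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
        self.lr = lr

    def forward(self, x):
        # Activation of every unit for input x (shape: (n_units,)).
        return self.W @ x

    def local_update(self, x):
        a = self.forward(x)
        winner = np.argmax(a)          # local competition: one unit wins
        # Hebbian step: move the winner's weights toward the input, then
        # renormalize so no unit's weight vector grows without bound.
        self.W[winner] += self.lr * (x - self.W[winner])
        self.W[winner] /= np.linalg.norm(self.W[winner])
        return a                       # activations before the update

```

The update uses only the layer's own input and activations, with no error signal propagated from other layers; that locality is what distinguishes this family of rules from backpropagation.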
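Continuing the sketch, the abstract's "activation learning" readout can be imitated by summing activations across trained layers as a rough familiarity score for an input pattern. The `activation_score` function, the ReLU stacking, and the training loop below are hypothetical illustrations, not the paper's architecture.

```python
# Continues the sketch above (uses CompetitiveHebbianLayer and rng from it).

def activation_score(layers, x):
    """Total activation across layers as a rough familiarity score.

    Echoes the abstract's claim that output activation roughly measures
    how probable the input pattern is; the ReLU between layers is an
    illustrative assumption.
    """
    score = 0.0
    for layer in layers:
        x = np.maximum(layer.forward(x), 0.0)   # rectified layer output
        score += float(x.sum())
    return score

# Hypothetical usage: train on unlabeled 16-dim patterns, then score an input.
layers = [CompetitiveHebbianLayer(16, 32), CompetitiveHebbianLayer(32, 8)]
for _ in range(1000):
    x = rng.normal(size=16)                     # placeholder training data
    h = np.maximum(layers[0].local_update(x), 0.0)
    layers[1].local_update(h)
x_new = rng.normal(size=16)
print("familiarity score:", activation_score(layers, x_new))
```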
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning [0.0]
We introduce a flexible and scalable approach towards the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
arXiv Detail & Related papers (2023-05-10T18:16:04Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning at equilibrium that applies to equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Hebbian Continual Representation Learning [9.54473759331265]
Continual learning aims to bring machine learning into more realistic, sequential-task scenarios.
We investigate whether biologically inspired Hebbian learning is useful for tackling continual-learning challenges.
arXiv Detail & Related papers (2022-06-28T09:21:03Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- PyTorch-Hebbian: facilitating local learning in a deep learning framework [67.67299394613426]
Hebbian local learning has shown potential as an alternative training mechanism to backpropagation.
We propose a framework for thorough and systematic evaluation of local learning rules in existing deep learning pipelines.
The framework is used to extend the Krotov-Hopfield learning rule to standard convolutional neural networks without sacrificing accuracy (a simplified sketch of that rule appears after this list).
arXiv Detail & Related papers (2021-01-31T10:53:08Z)
- Brain-inspired global-local learning incorporated with neuromorphic computing [35.70151531581922]
We report a neuromorphic hybrid learning model by introducing a brain-inspired meta-learning paradigm and a differentiable spiking model incorporating neuronal dynamics and synaptic plasticity.
We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors.
arXiv Detail & Related papers (2020-06-05T04:24:19Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
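Since the PyTorch-Hebbian entry above references the Krotov-Hopfield rule, here is a hedged sketch of its simplified (p = 2) form from Krotov & Hopfield's "Unsupervised learning by competing hidden units": the most-driven unit is pushed toward the input, the k-th most-driven unit receives an anti-Hebbian push, and a second term bounds the weight norms. The function name and the hyperparameter values lr, delta, and k are illustrative assumptions.

```python
import numpy as np

def krotov_hopfield_step(W, x, lr=0.02, delta=0.4, k=2):
    """One simplified (p = 2) Krotov-Hopfield update.

    W: float array (n_hidden, n_in); x: input (n_in,); assumes k >= 2.
    Hyperparameter values here are illustrative, not from the paper.
    """
    currents = W @ x                       # drive of each hidden unit
    order = np.argsort(currents)[::-1]     # units ranked by drive, descending
    g = np.zeros(W.shape[0])
    g[order[0]] = 1.0                      # strongest unit: Hebbian push
    g[order[k - 1]] = -delta               # k-th unit: anti-Hebbian push
    # Second term keeps each unit's weight norm bounded (p = 2 form).
    W += lr * g[:, None] * (x[None, :] - currents[:, None] * W)
    return W
```

Like the winner-take-all sketch after the abstract, this update depends only on the layer's own inputs and drives, so it needs no backpropagated error signal.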