Hebbian Continual Representation Learning
- URL: http://arxiv.org/abs/2207.04874v1
- Date: Tue, 28 Jun 2022 09:21:03 GMT
- Title: Hebbian Continual Representation Learning
- Authors: Paweł Morawiecki, Andrii Krutsylo, Maciej Wołczyk, Marek Śmieja
- Abstract summary: Continual Learning aims to bring machine learning into a more realistic scenario.
We investigate whether biologically inspired Hebbian learning is useful for tackling continual challenges.
- Score: 9.54473759331265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual Learning aims to bring machine learning into a more realistic
scenario, where tasks are learned sequentially and the i.i.d. assumption is not
preserved. Although this setting is natural for biological systems, it proves
very difficult for machine learning models such as artificial neural networks.
To reduce this performance gap, we investigate whether biologically
inspired Hebbian learning is useful for tackling continual
challenges. In particular, we highlight a realistic and often overlooked
unsupervised setting, where the learner has to build representations without
any supervision. By combining sparse neural networks with the Hebbian learning
principle, we build a simple yet effective alternative (HebbCL) to typical
neural network models trained via gradient descent. Due to Hebbian learning,
the network has easily interpretable weights, which may be essential in
critical applications such as security or healthcare. We demonstrate the
efficacy of HebbCL in an unsupervised learning setting applied to the MNIST
and Omniglot datasets. We also adapt the algorithm to the supervised scenario
and obtain promising results in class-incremental learning.
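The abstract names two ingredients: Hebbian weight updates and sparse activity. Below is a minimal sketch of how these can be combined, assuming a NumPy environment and illustrative sizes (`n_in`, `n_hidden`, `k`, `lr`); it is not the authors' HebbCL implementation, only the general pattern of an unsupervised Hebbian update with winner-take-all sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, k, lr = 784, 256, 16, 0.01      # illustrative sizes
W = rng.normal(scale=0.1, size=(n_hidden, n_in))

def hebbian_step(x):
    """One unsupervised update: the k most active units (winner-take-all
    sparsity) move their weights toward the input, then renormalize."""
    global W
    winners = np.argsort(W @ x)[-k:]            # k most active units
    W[winners] += lr * (x - W[winners])         # Hebbian pull toward input
    W[winners] /= np.linalg.norm(W[winners], axis=1, keepdims=True)

# Usage: stream unlabeled inputs one at a time; no labels, no backprop.
for _ in range(1000):
    hebbian_step(rng.random(n_in))
```

Because each update touches only the winning rows, individual units tend to specialize on clusters of inputs, which is one reason Hebbian weights are often easier to inspect than backprop-trained ones.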
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron [3.069335774032178]
We use a dataset-process approach to derive flow equations describing learning.
We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve.
This approach points a way toward analyzing learning dynamics for more-complex circuit architectures.
arXiv Detail & Related papers (2024-09-05T17:58:28Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show almost a twofold increase in accuracy.
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
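The continual SOM above builds on the classic self-organizing map update. Here is a minimal sketch of that base update, assuming NumPy and an illustrative 10x10 map; the paper's continual extensions (e.g., its low-memory mechanisms) are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, n_in = 10, 10, 784            # illustrative map size
W = rng.random((grid_h * grid_w, n_in))       # one prototype per map unit
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)])

def som_step(x, lr=0.1, sigma=1.5):
    """Classic online SOM update: move the best-matching unit and its
    grid neighbors toward the input, weighted by a Gaussian kernel."""
    global W
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))    # best-matching unit
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)  # squared grid distance
    h = np.exp(-d2 / (2 * sigma ** 2))                # neighborhood kernel
    W += lr * h[:, None] * (x - W)

for _ in range(1000):
    som_step(rng.random(n_in))
```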
- Activation Learning by Local Competitions [4.441866681085516]
We develop a biology-inspired learning rule that discovers features by local competitions among neurons.
It is demonstrated that the unsupervised features learned by this local learning rule can serve as a pre-training model.
arXiv Detail & Related papers (2022-09-26T10:43:29Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- aSTDP: A More Biologically Plausible Learning [0.0]
We introduce approximate STDP, a new neural network learning framework.
It uses only STDP rules for supervised and unsupervised learning.
It can make predictions or generate patterns in one model without additional configuration.
arXiv Detail & Related papers (2022-05-22T08:12:50Z)
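For reference, the standard pairwise STDP window that rules like aSTDP approximate: potentiation when a presynaptic spike precedes a postsynaptic one, depression otherwise. A self-contained sketch with illustrative constants (`A_plus`, `A_minus`, `tau`); the paper's own approximation is not reproduced here.

```python
import numpy as np

# Pairwise STDP on one synapse: potentiate when pre fires before post,
# depress when post fires before pre; constants are illustrative.
A_plus, A_minus, tau = 0.01, 0.012, 20.0   # tau in ms

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau)
    else:         # post before (or with) pre -> depression
        return -A_minus * np.exp(dt / tau)

print(stdp_dw(10.0, 15.0))   # small positive change
print(stdp_dw(15.0, 10.0))   # small negative change
```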
- Learning to Modulate Random Weights: Neuromodulation-inspired Neural Networks For Efficient Continual Learning [1.9580473532948401]
We introduce a novel neural network architecture inspired by neuromodulation in biological nervous systems.
We show that this approach has strong learning performance per task despite the very small number of learnable parameters.
arXiv Detail & Related papers (2022-04-08T21:12:13Z)
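The core idea of modulating random weights can be sketched in a few lines: keep a random weight matrix frozen and learn only a small per-output gain vector. A minimal PyTorch sketch under that reading; the class name and sizes are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

class ModulatedLinear(torch.nn.Module):
    """Frozen random weights; only a per-output gain is trained,
    so the learnable parameter count stays tiny."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.register_buffer("W", torch.randn(n_out, n_in) / n_in ** 0.5)
        self.gain = torch.nn.Parameter(torch.ones(n_out))  # learnable

    def forward(self, x):                     # x: (batch, n_in)
        return F.linear(x, self.gain[:, None] * self.W)

layer = ModulatedLinear(784, 128)
print(sum(p.numel() for p in layer.parameters()))  # 128 trainable, not 128*784
print(layer(torch.randn(32, 784)).shape)           # torch.Size([32, 128])
```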
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Hebbian learning with gradients: Hebbian convolutional neural networks with modern deep learning frameworks [2.7666483899332643]
We show that Hebbian learning in hierarchical, convolutional neural networks can be implemented almost trivially with modern deep learning frameworks.
We build Hebbian convolutional multi-layer networks for object recognition.
arXiv Detail & Related papers (2021-07-04T20:50:49Z)
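One way to read "implemented almost trivially with modern deep learning frameworks" is the surrogate-loss trick: build a scalar whose gradient equals the negative Hebbian update, and let a standard optimizer apply it. A minimal PyTorch sketch of that trick for a single linear layer; the paper covers convolutional networks, and this is an illustration under that assumption, not its code.

```python
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(784, 128, bias=False)
opt = torch.optim.SGD(layer.parameters(), lr=0.01)

x = torch.randn(32, 784)        # presynaptic activities
y = layer(x)                    # postsynaptic activities

# Gradient of -(y.detach() * y).sum() w.r.t. W is -y^T x, so one SGD
# step applies dW = +lr * y^T x: a batched Hebbian outer-product update.
loss = -(y.detach() * y).sum()
opt.zero_grad()
loss.backward()
opt.step()
```

In practice such updates are usually paired with normalization or competition to keep the weights bounded; this sketch omits those stabilizers.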
This list is automatically generated from the titles and abstracts of the papers on this site.