New Insights on Learning Rules for Hopfield Networks: Memory and
Objective Function Minimisation
- URL: http://arxiv.org/abs/2010.01472v1
- Date: Sun, 4 Oct 2020 03:02:40 GMT
- Title: New Insights on Learning Rules for Hopfield Networks: Memory and
Objective Function Minimisation
- Authors: Pavel Tolmachev and Jonathan H. Manton
- Abstract summary: We take a new look at learning rules, exhibiting them as descent-type algorithms for various cost functions.
We discuss the role of biases (the external inputs) in the learning process in Hopfield networks.
- Score: 1.7006003864727404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hopfield neural networks are a possible basis for modelling associative
memory in living organisms. After summarising previous studies in the field, we
take a new look at learning rules, exhibiting them as descent-type algorithms
for various cost functions. We also propose several new cost functions suitable
for learning. We discuss the role of biases (the external inputs) in the
learning process in Hopfield networks. Furthermore, we apply Newton's method for
learning memories, and experimentally compare the performances of various
learning rules. Finally, to add to the debate on whether allowing connections of a
neuron to itself enhances memory capacity, we numerically investigate the
effects of self-coupling.
Keywords: Hopfield Networks, associative memory, content addressable memory,
learning rules, gradient descent, attractor networks
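The abstract frames learning rules as descent-type algorithms on a cost function, with biases (external inputs) and self-coupling as further design choices. The following is a minimal sketch of that idea in Python/NumPy, not the authors' code: the perceptron-style hinge cost over the post-synaptic local fields, the learning rate, the margin, and the allow_self_coupling flag are illustrative assumptions, and the classic one-shot Hebbian rule is shown alongside for comparison.

```python
# Illustrative sketch only (not the paper's implementation): Hebbian and
# perceptron-style descent rules for a Hopfield network over +/-1 patterns.
import numpy as np

def hebbian_weights(patterns, allow_self_coupling=False):
    """One-shot Hebbian rule: W = (1/N) * sum_mu xi^mu (xi^mu)^T."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    if not allow_self_coupling:
        np.fill_diagonal(W, 0.0)              # conventional choice: no self-connections
    return W

def descent_rule(patterns, biases=None, lr=0.01, margin=1.0,
                 steps=500, allow_self_coupling=False):
    """Gradient descent on a hinge cost over the local fields.

    Cost: C(W, b) = sum_{mu,i} max(0, margin - xi_i^mu * h_i^mu),
    where h^mu = W @ xi^mu + b.  Each step nudges W (and b) so that every
    stored pattern becomes a fixed point with at least the given margin.
    """
    P, N = patterns.shape
    W = np.zeros((N, N))
    b = np.zeros(N) if biases is None else np.asarray(biases, dtype=float).copy()
    for _ in range(steps):
        H = patterns @ W.T + b                # local fields, shape (P, N)
        viol = (patterns * H) < margin        # margin violations per pattern/neuron
        dW = (viol * patterns).T @ patterns   # -dC/dW (descent direction)
        db = (viol * patterns).sum(axis=0)    # -dC/db
        W += lr * dW
        b += lr * db
        if not allow_self_coupling:
            np.fill_diagonal(W, 0.0)
    return W, b

def recall(W, b, state, sweeps=20):
    """Asynchronous +/-1 updates for a fixed number of sweeps."""
    s = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s + b[i] >= 0 else -1
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xi = rng.choice([-1, 1], size=(5, 64))    # 5 random patterns, 64 neurons
    W, b = descent_rule(xi)
    noisy = xi[0] * rng.choice([1, -1], size=64, p=[0.9, 0.1])   # flip ~10% of bits
    print("pattern recovered:", np.array_equal(recall(W, b, noisy), xi[0]))
```

In the same spirit, the hinge could be replaced by a smooth cost and minimised with second-order (Newton-type) updates, as the abstract mentions; the sketch above sticks to plain first-order descent.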
Related papers
- Sequential Learning in the Dense Associative Memory [1.2289361708127877]
We investigate the performance of the Dense Associative Memory in sequential learning problems.
We show that existing sequential learning methods can be applied to the Dense Associative Memory to improve sequential learning performance.
arXiv Detail & Related papers (2024-09-24T04:23:00Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- A Sparse Quantized Hopfield Network for Online-Continual Memory [0.0]
Nervous systems learn online, where a stream of noisy data points is presented in a non-independent, identically distributed (non-i.i.d.) way.
Deep networks, on the other hand, typically use non-local learning algorithms and are trained in an offline, non-noisy, i.i.d. setting.
We implement this kind of model in a novel neural network called the Sparse Quantized Hopfield Network (SQHN).
arXiv Detail & Related papers (2023-07-27T17:46:17Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- Benchmarking Compositionality with Formal Languages [64.09083307778951]
We investigate whether large neural models in NLP can acquire the ability to combine primitive concepts into larger novel combinations while learning from data.
By randomly sampling over many transducers, we explore which of their properties contribute to learnability of a compositional relation by a neural network.
We find that the models either learn the relations completely or not at all. The key is transition coverage, setting a soft learnability limit at 400 examples per transition.
arXiv Detail & Related papers (2022-08-17T10:03:18Z)
- Biological learning in key-value memory networks [0.45880283710344055]
Memory-augmented neural networks in machine learning commonly use a key-value mechanism to store and read out memories in a single step.
We propose an implementation of basic key-value memory that stores inputs using a combination of biologically plausible three-factor plasticity rules.
Our results suggest a compelling alternative to the classical Hopfield network as a model of biological long-term memory (a minimal sketch of the key-value readout appears after this list).
arXiv Detail & Related papers (2021-10-26T19:26:53Z)
- The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Finding online neural update rules by learning to remember [3.295767453921912]
We investigate learning of the online local update rules for neural activations (bodies) and weights (synapses) from scratch.
Different neuron types are represented by different embedding vectors which allows the same two functions to be used for all neurons.
We train for this objective using short term back-propagation and analyze the performance as a function of both the different network types and the difficulty of the problem.
arXiv Detail & Related papers (2020-03-06T10:31:30Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
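The "Biological learning in key-value memory networks" entry above mentions storing and reading out memories with a key-value mechanism in a single step. Below is a minimal sketch of that generic mechanism only (a softmax-weighted readout over stored keys), not the paper's three-factor plasticity implementation; the beta sharpness parameter, the slot-based writes, and the array shapes are illustrative assumptions.

```python
# Illustrative sketch only: generic single-step key-value memory readout.
import numpy as np

class KeyValueMemory:
    """Store (key, value) pairs; read a value back in one step from a query."""

    def __init__(self, key_dim, value_dim, capacity, beta=8.0):
        self.K = np.zeros((capacity, key_dim))     # stored keys
        self.V = np.zeros((capacity, value_dim))   # stored values
        self.n = 0                                 # number of stored pairs
        self.beta = beta                           # softmax sharpness (assumed)

    def write(self, key, value):
        """One-shot write into the next free slot (no iterative training)."""
        assert self.n < len(self.K), "memory is full"
        self.K[self.n] = key
        self.V[self.n] = value
        self.n += 1

    def read(self, query):
        """Single-step readout: softmax over key similarities, weighted sum of values."""
        sims = self.K[:self.n] @ query             # dot-product similarities
        w = np.exp(self.beta * (sims - sims.max()))
        w /= w.sum()
        return w @ self.V[:self.n]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mem = KeyValueMemory(key_dim=32, value_dim=8, capacity=16)
    keys = rng.standard_normal((4, 32))
    vals = rng.standard_normal((4, 8))
    for k, v in zip(keys, vals):
        mem.write(k, v)
    noisy_query = keys[2] + 0.1 * rng.standard_normal(32)   # corrupted cue
    print("value recovered:", np.allclose(mem.read(noisy_query), vals[2], atol=0.1))
```

With a sharp softmax the readout approaches a nearest-key lookup, which is the sense in which such key-value memories relate to (modern) Hopfield-style associative recall.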