Hierarchical Associative Memory
- URL: http://arxiv.org/abs/2107.06446v1
- Date: Wed, 14 Jul 2021 01:38:40 GMT
- Title: Hierarchical Associative Memory
- Authors: Dmitry Krotov
- Abstract summary: Dense Associative Memories or Modern Hopfield Networks have many appealing properties.
They can do pattern completion, store a large number of memories, and can be described using a recurrent neural network.
This paper tackles this gap and describes a fully recurrent model of associative memory with an arbitrarily large number of layers.
- Score: 2.66512000865131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense Associative Memories or Modern Hopfield Networks have many appealing
properties of associative memory. They can do pattern completion, store a large
number of memories, and can be described using a recurrent neural network with
a degree of biological plausibility and rich feedback between the neurons. At
the same time, up until now all the models of this class have had only one
hidden layer, and have only been formulated with densely connected network
architectures, two aspects that hinder their machine learning applications.
This paper tackles this gap and describes a fully recurrent model of
associative memory with an arbitrarily large number of layers, some of which can
be locally connected (convolutional), and a corresponding energy function that
decreases on the dynamical trajectory of the neurons' activations. The memories
of the full network are dynamically "assembled" using primitives encoded in the
synaptic weights of the lower layers, with the "assembling rules" encoded in
the synaptic weights of the higher layers. In addition to the bottom-up
propagation of information, typical of commonly used feedforward neural
networks, the model described here has rich top-down feedback from higher layers
that helps the lower-layer neurons decide on their response to the input
stimuli.
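To make the architecture concrete, below is a minimal NumPy sketch of a layered, fully recurrent associative memory in which every hidden layer receives a bottom-up drive from the layer below and a top-down drive from the layer above, and the network is relaxed toward a fixed point with the visible layer clamped. The layer sizes, ReLU activation, leaky-integration update, and all names in the sketch are illustrative assumptions, not the paper's exact equations or notation.
```python
import numpy as np

rng = np.random.default_rng(0)

# Visible layer followed by two hidden layers; sizes are arbitrary for illustration.
layer_sizes = [64, 32, 16]

# W[l] connects layer l to layer l+1; the same matrix is used bottom-up (W[l] @ g_l)
# and top-down (W[l].T @ g_{l+1}), mirroring the symmetric feedback described in the abstract.
W = [0.1 * rng.standard_normal((layer_sizes[l + 1], layer_sizes[l]))
     for l in range(len(layer_sizes) - 1)]

def g(x):
    # Illustrative activation (ReLU); the paper derives activations from layer-wise
    # functions, so treat this particular choice as an assumption of the sketch.
    return np.maximum(x, 0.0)

def relax(x_visible, n_steps=200, dt=0.1):
    """Relax the hidden layers toward a fixed point with the visible layer clamped."""
    x = [np.asarray(x_visible, dtype=float)] + [np.zeros(n) for n in layer_sizes[1:]]
    for _ in range(n_steps):
        for l in range(1, len(x)):
            bottom_up = W[l - 1] @ g(x[l - 1])                          # drive from the layer below
            top_down = W[l].T @ g(x[l + 1]) if l + 1 < len(x) else 0.0  # feedback from the layer above
            x[l] = x[l] + dt * (bottom_up + top_down - x[l])            # leaky integration step
    return x

states = relax(rng.standard_normal(layer_sizes[0]))
print([s.shape for s in states])  # [(64,), (32,), (16,)]
```
The abstract's guarantee that an energy function decreases along the neurons' trajectories relies on using the same weights bottom-up and top-down (note the shared W[l] above) together with suitably chosen activation functions; the precise conditions are given in the paper, and this sketch only mirrors the overall structure.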
Related papers
- Storing overlapping associative memories on latent manifolds in low-rank spiking networks [5.041384008847852]
We revisit the associative memory problem in light of advances in understanding spike-based computation.
We show that the spiking activity for a large class of all-inhibitory networks is situated on a low-dimensional, convex, and piecewise-linear manifold.
We propose several learning rules, and demonstrate a linear scaling of the storage capacity with the number of neurons, as well as robust pattern completion abilities.
arXiv Detail & Related papers (2024-11-26T14:48:25Z) - Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - Spiking representation learning for associative memories [0.0]
We introduce a novel artificial spiking neural network (SNN) that performs unsupervised representation learning and associative memory operations.
The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories.
arXiv Detail & Related papers (2024-06-05T08:30:11Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
arXiv Detail & Related papers (2023-01-03T20:57:22Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z) - Memory via Temporal Delays in weightless Spiking Neural Network [0.08399688944263842]
We present a prototype for weightless spiking neural networks that can perform a simple classification task.
The memory in this network is stored in the timing between neurons, rather than the strength of the connection.
arXiv Detail & Related papers (2022-02-15T02:09:33Z) - Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z) - Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - Triple Memory Networks: a Brain-Inspired Method for Continual Learning [35.40452724755021]
A neural network adjusts its parameters when learning a new task, but then fails to perform the old tasks well.
The brain has a powerful ability to continually learn new experience without catastrophic interference.
Inspired by such brain strategy, we propose a novel approach named triple memory networks (TMNs) for continual learning.
arXiv Detail & Related papers (2020-03-06T11:35:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.