Associative Memory Model with Neural Networks: Memorizing multiple images with one neuron
- URL: http://arxiv.org/abs/2510.06542v1
- Date: Wed, 08 Oct 2025 00:44:46 GMT
- Title: Associative Memory Model with Neural Networks: Memorizing multiple images with one neuron
- Authors: Hiroshi Inazawa
- Abstract summary: This paper presents a neural network model (associative memory model) for memory and recall of images. One of the features of this model is that several different images are stored simultaneously in one neuron. This model allows for complete recall of an image even when an incomplete image is presented.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a neural network model (associative memory model) for memory and recall of images. In this model, a single neuron can memorize multiple images, and when that neuron is activated, all of the memorized images can be recalled at the same time. The system is composed of a single cluster of numerous neurons, referred to as the "Cue Ball," and multiple neural network layers, collectively called the "Recall Net." One of the features of this model is that several different images are stored simultaneously in one neuron, and by presenting one of the images stored in that neuron, all stored images are recalled. Furthermore, this model allows for complete recall of an image even when an incomplete image is presented.
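The core behavior the abstract describes, recovering a complete stored image from an incomplete cue, can be illustrated with a classical Hopfield-style associative memory. This is only a generic sketch of that idea, not the paper's Cue Ball / Recall Net architecture; the patterns and function names below are illustrative assumptions.

```python
# Minimal Hopfield-style associative memory (illustrative sketch only;
# NOT the Cue Ball / Recall Net model from the paper above).
# Patterns are stored in a Hebbian weight matrix, and a stored "image"
# is recovered even when an incomplete (corrupted) cue is presented.

def train(patterns):
    """Hebbian outer-product rule; each pattern is a list of +1/-1 values."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, cue, steps=10):
    """Synchronous sign updates; converges to a stored fixed point."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# Two orthogonal 8-"pixel" patterns stand in for images.
patterns = [[1, -1, 1, -1, 1, -1, 1, -1],
            [1, 1, 1, 1, -1, -1, -1, -1]]
W = train(patterns)
cue = list(patterns[0])
cue[0], cue[1] = -cue[0], -cue[1]   # corrupt two pixels: incomplete cue
restored = recall(W, cue)
print(restored == patterns[0])       # True: the full pattern is recovered
```

With two orthogonal patterns the corrupted cue still overlaps most with the first pattern, so one update step already restores it; the paper's model differs in that a single neuron keys multiple images at once.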
Related papers
- Associative Memory using Attribute-Specific Neuron Groups-1: Learning between Multiple Cue Balls [0.0]
The proposed model is based on a previous study on memory and recall of multiple images using the Cue Ball and Recall Net. The system consists of three components: C.CB-RN for processing color, S.CB-RN for processing shape, and V.CB-RN for processing size.
arXiv Detail & Related papers (2025-12-02T01:28:45Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments expanding the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - An associative memory model with very high memory rate: Image storage by sequential addition learning [0.0]
This system realizes the bidirectional learning between one cue neuron in the cue ball and the neurons in the recall net.
It can memorize many patterns and recall these patterns or those that are similar at any time.
arXiv Detail & Related papers (2022-10-08T02:56:23Z) - Memory via Temporal Delays in weightless Spiking Neural Network [0.08399688944263842]
We present a prototype for weightless spiking neural networks that can perform a simple classification task.
The memory in this network is stored in the timing between neurons, rather than in the strength of the connections.
arXiv Detail & Related papers (2022-02-15T02:09:33Z) - Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z) - Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds and then completely disappear if no other stimuli come.
This kind of short-term memory can keep operative information for seconds, then completely forget it to avoid overlapping with forthcoming patterns.
We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z) - Hierarchical Associative Memory [2.66512000865131]
Associative Memories or Modern Hopfield Networks have many appealing properties.
They can do pattern completion, store a large number of memories, and can be described using a recurrent neural network.
This paper tackles this gap and describes a fully recurrent model of associative memory with an arbitrarily large number of layers.
arXiv Detail & Related papers (2021-07-14T01:38:40Z) - Hidden Markov Modeling for Maximum Likelihood Neuron Reconstruction [3.6321891270689055]
Recent advances in brain clearing and imaging have made it possible to image entire mammalian brains at sub-micron resolution.
These images offer the potential to assemble brain-wide atlases of projection neuron morphology, but manual neuron reconstruction remains a bottleneck.
Here we present a method inspired by hidden Markov modeling and appearance modeling of fluorescent neuron images that can automatically trace neuronal processes.
arXiv Detail & Related papers (2021-06-04T20:24:56Z) - Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z) - Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z) - Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from that of the relationships between them (relational memory).
We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.