Associative Memory using Attribute-Specific Neuron Groups-1: Learning between Multiple Cue Balls
- URL: http://arxiv.org/abs/2512.02319v2
- Date: Fri, 05 Dec 2025 02:16:09 GMT
- Title: Associative Memory using Attribute-Specific Neuron Groups-1: Learning between Multiple Cue Balls
- Authors: Hiroshi Inazawa
- Abstract summary: The proposed model is based on a previous study on memory and recall of multiple images using the Cue Ball and Recall Net. The system consists of three components: C.CB-RN for processing color, S.CB-RN for processing shape, and V.CB-RN for processing size.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present a new neural network model based on attribute-specific representations (e.g., color, shape, size), a classic example of associative memory. The proposed model is based on a previous study on memory and recall of multiple images using the Cue Ball and Recall Net (referred to as the CB-RN system, or simply CB-RN) [1]. The system consists of three components: C.CB-RN for processing color, S.CB-RN for processing shape, and V.CB-RN for processing size. When an attribute data pattern is presented to the CB-RN system, the corresponding attribute pattern of the cue neurons within the Cue Balls is associatively recalled in the Recall Net. Each image pattern presented to these CB-RN systems is represented using a two-dimensional code, specifically a QR code [2].
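The attribute-specific recall described in the abstract can be illustrated with a generic associative-memory toy. This is a hedged sketch, not the authors' CB-RN implementation: the `AttributeMemory` class, the Hebbian outer-product storage, and the sign-update recall are standard Hopfield-style assumptions standing in for the (unpublished here) Cue Ball and Recall Net internals.

```python
import numpy as np

# Hedged sketch: one Hopfield-style store per attribute, mirroring the
# C.CB-RN / S.CB-RN / V.CB-RN split. Not the paper's actual architecture.

class AttributeMemory:
    """Hebbian outer-product store over +/-1 patterns for one attribute."""

    def __init__(self, size):
        self.W = np.zeros((size, size))

    def memorize(self, pattern):
        p = np.asarray(pattern, dtype=float)
        self.W += np.outer(p, p)          # Hebbian outer-product update
        np.fill_diagonal(self.W, 0.0)     # no self-coupling

    def recall(self, cue, steps=10):
        s = np.asarray(cue, dtype=float)
        for _ in range(steps):
            s = np.sign(self.W @ s)       # synchronous sign updates
            s[s == 0] = 1.0               # break ties deterministically
        return s

# One memory per attribute.
memories = {name: AttributeMemory(16) for name in ("color", "shape", "size")}

rng = np.random.default_rng(0)
stored = rng.choice([-1.0, 1.0], size=16)
memories["color"].memorize(stored)

# Corrupt a few entries and recall the stored pattern from the noisy cue.
cue = stored.copy()
cue[:3] *= -1
recalled = memories["color"].recall(cue)
print(np.array_equal(recalled, stored))  # True: a single stored pattern is recovered
```

With a single stored pattern, one update step already restores all flipped units; the multi-attribute dictionary only illustrates how separate stores could be cued independently, as the abstract describes for color, shape, and size.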
Related papers
- Associative Memory Model with Neural Networks: Memorizing multiple images with one neuron [0.0]
This paper presents a neural network model (associative memory model) for memory and recall of images. One of the features of this model is that several different images are stored simultaneously in one neuron. This model allows for complete recall of an image even when an incomplete image is presented.
arXiv Detail & Related papers (2025-10-08T00:44:46Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- NeRN -- Learning Neural Representations for Neural Networks [3.7384109981836153]
We show that, when adapted correctly, neural representations can be used to represent the weights of a pre-trained convolutional neural network.
Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network.
We present two applications using NeRN, demonstrating the capabilities of the learned representations.
arXiv Detail & Related papers (2022-12-27T17:14:44Z)
- Cross-Stitched Multi-task Dual Recursive Networks for Unified Single Image Deraining and Desnowing [70.24489870383027]
We present the Cross-stitched Multi-task Unified Dual Recursive Network (CMUDRN) model targeting the task of unified deraining and desnowing.
The proposed model makes use of cross-stitch units that enable multi-task learning across two separate Dual Recursive Network (DRN) models.
arXiv Detail & Related papers (2022-11-15T16:44:53Z)
- An associative memory model with very high memory rate: Image storage by sequential addition learning [0.0]
This system realizes the bidirectional learning between one cue neuron in the cue ball and the neurons in the recall net.
It can memorize many patterns and recall these patterns or those that are similar at any time.
arXiv Detail & Related papers (2022-10-08T02:56:23Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based subnetworks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes up to 2 orders of magnitude faster with up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream of the visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Recurrent neural networks that generalize from examples and optimize by dreaming [0.0]
We introduce a generalized Hopfield network where pairwise couplings between neurons are built according to Hebb's prescription for on-line learning.
We let the network experience solely a dataset made of a sample of noisy examples for each pattern.
Remarkably, the sleeping mechanisms always significantly reduce the dataset size required to correctly generalize.
arXiv Detail & Related papers (2022-04-17T08:40:54Z)
- A Novel ANN Structure for Image Recognition [0.0]
The paper presents Multi-layer Auto Resonance Networks (ARN), a new neural model, for image recognition.
Neurons in ARN, called Nodes, latch on to an incoming pattern and resonate when the input is within their 'coverage'.
arXiv Detail & Related papers (2020-10-09T14:07:29Z)
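Several entries above, notably the generalized Hopfield network that learns from noisy examples, build pairwise couplings with Hebb's prescription. The following is a hedged sketch of that idea only: plain Hebbian learning over noisy +/-1 examples, with none of the cited paper's sleeping/dreaming mechanisms. The example counts, noise rate, and archetype are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_examples = 32, 200
archetype = rng.choice([-1.0, 1.0], size=n)   # hidden pattern, never seen directly

# Noisy examples: each unit flipped independently with probability 0.2.
flips = rng.random((n_examples, n)) < 0.2
examples = np.where(flips, -archetype, archetype)

# Hebb's prescription applied to the examples (not to the hidden archetype).
W = examples.T @ examples / n_examples
np.fill_diagonal(W, 0.0)

# Retrieval: iterate sign updates from a fresh noisy probe.
probe = np.where(rng.random(n) < 0.2, -archetype, archetype)
s = probe.copy()
for _ in range(10):
    s = np.sign(W @ s)
    s[s == 0] = 1.0

overlap = float(s @ archetype) / n
print(overlap)  # close to 1.0 when the archetype is recovered
```

Even without a dreaming/unlearning stage, averaging many noisy examples lets the Hebbian couplings concentrate on the archetype; the cited paper's contribution is that its sleeping mechanisms reduce how many such examples are needed.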
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.