The Method for Storing Patterns in Neural Networks-Memorization and Recall of QR code Patterns-
- URL: http://arxiv.org/abs/2504.06631v1
- Date: Wed, 09 Apr 2025 07:09:40 GMT
- Title: The Method for Storing Patterns in Neural Networks-Memorization and Recall of QR code Patterns-
- Authors: Hiroshi Inazawa
- Abstract summary: We propose a mechanism for storing complex patterns within a neural network and subsequently recalling them. The advantage of storing patterns in a neural network lies in its ability to recall the original pattern even when an incomplete version is presented.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose a mechanism for storing complex patterns within a neural network and subsequently recalling them. This model is based on our work published in 2018 (Inazawa, 2018), which we have refined and extended here. With recent advances in deep learning and large language model (LLM)-based AI technologies (generative AI), methodologies for learning are becoming increasingly well established. In the future, we expect to see further research on memory using models based on Transformers (Vaswani et al., 2017; Rae et al., 2020), but in this paper we propose a simpler and more powerful model of memory and recall in neural networks. The advantage of storing patterns in a neural network lies in its ability to recall the original pattern even when an incomplete version is presented. The patterns we use in this study are QR codes (DENSO WAVE, 1994), which have become widely used as an information transmission tool in recent years.
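The abstract describes associative storage and recall: a network memorizes binary, QR-code-like patterns and reconstructs the stored pattern when given a corrupted or incomplete probe. The sketch below illustrates that behavior with a classical Hopfield network and Hebbian storage; it is not the authors' specific architecture, and the pattern size, number of stored patterns, and 20% corruption rate are illustrative assumptions.

```python
# Minimal sketch: associative storage and recall of binary, QR-code-like patterns
# with a classical Hopfield network (Hebbian storage). This illustrates the
# memorize-then-recall idea only, not the paper's model; sizes and the corruption
# rate are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns of shape (P, N)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W

def recall(W, probe, steps=20):
    """Synchronous updates until the state stops changing."""
    s = probe.astype(float)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store three random 25x25 bipolar patterns (stand-ins for QR codes).
N = 25 * 25
patterns = rng.choice([-1, 1], size=(3, N)).astype(float)
W = store(patterns)

# Corrupt 20% of one pattern's cells and recall the original.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
probe[flip] *= -1
recovered = recall(W, probe)
print("fraction of cells recovered:", np.mean(recovered == patterns[0]))
```

With a few hundred cells and only a handful of stored patterns, the Hebbian rule is well within the classical capacity limit (roughly 0.14N patterns), so the corrupted probe converges back to the stored pattern.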
Related papers
- Variational autoencoder-based neural network model compression [4.992476489874941]
Variational Autoencoders (VAEs), as a form of deep generative model, have been widely used in recent years.
This paper aims to explore neural network model compression method based on VAE.
arXiv Detail & Related papers (2024-08-25T09:06:22Z)
- Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP [2.179313476241343]
We propose a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex.
Our new model can easily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware.
arXiv Detail & Related papers (2023-06-07T13:08:46Z)
- Cooperative data-driven modeling [44.99833362998488]
Data-driven modeling in mechanics is evolving rapidly based on recent machine learning advances.
New data and models created by different groups become available, opening possibilities for cooperative modeling.
Artificial neural networks suffer from catastrophic forgetting, i.e. they forget how to perform an old task when trained on a new one.
This hinders cooperation because adapting an existing model for a new task affects the performance on a previous task trained by someone else.
arXiv Detail & Related papers (2022-11-23T14:27:25Z)
- OLLA: Decreasing the Memory Usage of Neural Networks by Optimizing the Lifetime and Location of Arrays [6.418232942455968]
OLLA is an algorithm that optimizes the lifetime and memory location of the tensors used to train neural networks.
We present several techniques to simplify the encoding of the problem, and enable our approach to scale to the size of state-of-the-art neural networks.
arXiv Detail & Related papers (2022-10-24T02:39:13Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Function Regression using Spiking DeepONet [2.935661780430872]
We present an SNN-based method to perform regression, which has been a challenge due to the inherent difficulty in representing a function's input domain and continuous output values as spikes.
We use a DeepONet, a neural network designed to learn operators, to learn the behavior of spikes.
We propose several methods to use a DeepONet in the spiking framework, and present accuracy and training time for different benchmarks.
arXiv Detail & Related papers (2022-05-17T15:22:22Z)
- Embracing New Techniques in Deep Learning for Estimating Image Memorability [0.0]
We propose and evaluate five alternative deep learning models to predict image memorability.
Our findings suggest that the key prior memorability network had overstated its generalizability and was overfit on its training set.
We make our new state-of-the-art model readily available to the research community, allowing memory researchers to make predictions about memorability on a wider range of images.
arXiv Detail & Related papers (2021-05-21T23:05:23Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
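Relating to the Model Fusion via Optimal Transport entry that closes the list above: the core idea is to align corresponding neurons of two networks layer by layer before averaging their weights. The sketch below is a simplification that replaces the optimal-transport coupling with a hard assignment (Hungarian matching); the layer shape and toy weight matrices are assumptions for illustration, not the paper's algorithm.

```python
# Simplified sketch of layer-wise model fusion: neurons of model B are matched to
# neurons of model A with a hard assignment (Hungarian matching) and then averaged.
# The paper's method uses optimal transport, which also allows soft couplings; the
# layer shape and toy weights below are assumptions for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(Wa, Wb):
    """Fuse two weight matrices of shape (out_dim, in_dim)."""
    cost = -Wa @ Wb.T                      # negative similarity between neurons
    _, col = linear_sum_assignment(cost)   # best one-to-one matching
    Wb_aligned = Wb[col]                   # permute B's neurons to line up with A
    # In a multi-layer network, the same permutation would also have to be applied
    # to the next layer's incoming weights.
    return 0.5 * (Wa + Wb_aligned)

rng = np.random.default_rng(0)
Wa = rng.normal(size=(8, 16))
Wb = Wa[rng.permutation(8)] + 0.01 * rng.normal(size=(8, 16))  # a permuted near-copy
fused = fuse_layer(Wa, Wb)
print("max deviation of fused layer from model A:", np.abs(fused - Wa).max())
```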
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.