A Computational Model of Learning and Memory Using Structurally Dynamic Cellular Automata
- URL: http://arxiv.org/abs/2501.06192v1
- Date: Fri, 20 Dec 2024 17:26:17 GMT
- Title: A Computational Model of Learning and Memory Using Structurally Dynamic Cellular Automata
- Authors: Jeet Singh
- Abstract summary: This paper proposes a mathematical and computational model of learning and memory based on a small set of bio-plausible functions.
Experimental results show that the model can make near-optimal choices to re-discover a reward state after a single training run.
- Score: 0.0
- Abstract: In the fields of computation and neuroscience, much is still unknown about the underlying computations that enable key cognitive functions including learning, memory, abstraction and behavior. This paper proposes a mathematical and computational model of learning and memory based on a small set of bio-plausible functions that include coincidence detection, signal modulation, and reward/penalty mechanisms. Our theoretical approach proposes that these basic functions are sufficient to establish and modulate an information space over which computation can be carried out, generating signal gradients usable for inference and behavior. The computational method used to test this is a structurally dynamic cellular automaton with continuous-valued cell states and a series of recursive steps propagating over an undirected graph with the memory function embedded entirely in the creation and modulation of graph edges. The experimental results show: that the toy model can make near-optimal choices to re-discover a reward state after a single training run; that it can avoid complex penalty configurations; that signal modulation and network plasticity can generate exploratory behaviors in sparse reward environments; that the model generates context-dependent memory representations; and that it exhibits high computational efficiency because of its minimal, single-pass training requirements combined with flexible and contextual memory representation.
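To make the mechanism above concrete, the following is a minimal sketch of a structurally dynamic cellular automaton with continuous cell states, coincidence detection, and reward-driven edge modulation. The update rule, thresholds, and learning constants are illustrative assumptions, not the paper's exact formulation.
```python
# Minimal sketch of a structurally dynamic cellular automaton (SDCA) with
# continuous cell states and memory stored only in graph edges.
# All update rules and constants below are illustrative assumptions.
from collections import defaultdict

class SDCA:
    def __init__(self, n_cells, threshold=0.5, decay=0.9):
        self.n = n_cells
        self.state = [0.0] * n_cells        # continuous-valued cell states
        self.edges = defaultdict(dict)      # undirected weighted graph: memory lives here
        self.threshold = threshold
        self.decay = decay

    def step(self, external_input=None):
        """One recursive propagation step over the current graph."""
        if external_input:
            for cell, value in external_input.items():
                self.state[cell] = max(self.state[cell], value)
        new_state = [s * self.decay for s in self.state]    # signal decays each step
        for i, neighbours in self.edges.items():
            if self.state[i] > self.threshold:              # active cells propagate signal
                for j, w in neighbours.items():
                    new_state[j] = min(1.0, new_state[j] + w * self.state[i])
        self.state = new_state
        self._coincidence_detection()

    def _coincidence_detection(self, lr=0.1):
        """Create or strengthen edges between simultaneously active cells."""
        active = [i for i, s in enumerate(self.state) if s > self.threshold]
        for i in active:
            for j in active:
                if i < j:
                    w = self.edges[i].get(j, 0.0) + lr
                    self.edges[i][j] = self.edges[j][i] = min(1.0, w)

    def modulate(self, reward):
        """Reward/penalty modulates edges between currently active cells."""
        active = [i for i, s in enumerate(self.state) if s > self.threshold]
        for i in active:
            for j in list(self.edges[i]):
                if j in active:
                    w = max(0.0, min(1.0, self.edges[i][j] + reward))
                    self.edges[i][j] = self.edges[j][i] = w
```
In this sketch, a training run would alternate step() calls with reward-driven modulate() calls at goal states; the learned edge weights then provide a signal gradient that inference can follow back toward the reward.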
Related papers
- No Equations Needed: Learning System Dynamics Without Relying on Closed-Form ODEs [56.78271181959529]
This paper proposes a conceptual shift to modeling low-dimensional dynamical systems by departing from the traditional two-step modeling process.
Instead of first discovering a closed-form equation and then analyzing it, our approach, direct semantic modeling, predicts the semantic representation of the dynamical system.
Our approach not only simplifies the modeling pipeline but also enhances the transparency and flexibility of the resulting models.
arXiv Detail & Related papers (2025-01-30T18:36:48Z)
- Evolvable Psychology Informed Neural Network for Memory Behavior Modeling [2.5258264040936305]
This paper proposes a theory-informed neural network for memory behavior modeling, named PsyINN.
It constructs a framework that combines a neural network with differentiating sparse regression, achieving joint optimization.
On four large-scale real-world memory behavior datasets, the proposed method surpasses the state-of-the-art methods in prediction accuracy.
arXiv Detail & Related papers (2024-08-23T01:35:32Z)
- Demolition and Reinforcement of Memories in Spin-Glass-like Neural Networks [0.0]
The aim of this thesis is to understand the effectiveness of Unlearning in both associative memory models and generative models.
The selection of structured data enables an associative memory model to retrieve concepts as attractors of a neural dynamics with considerable basins of attraction.
A novel regularization technique for Boltzmann Machines is presented and shown to outperform previously developed methods in learning hidden probability distributions from datasets.
arXiv Detail & Related papers (2024-03-04T23:12:42Z)
- Benchmarking Hebbian learning rules for associative memory [0.0]
Associative memory is a key concept in cognitive and computational brain science.
We benchmark six different learning rules on storage capacity and prototype extraction.
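For reference, below is a minimal sketch of the classical Hopfield-style Hebbian storage rule and its recall dynamics, the baseline setting such benchmarks compare against; the six specific rules evaluated in the paper are not reproduced here.
```python
# Minimal Hopfield-style associative memory with the classical Hebbian rule.
# Illustrative only; the benchmarked learning rules in the paper may differ.
import numpy as np

def hebbian_store(patterns):
    """patterns: array of shape (P, N) with entries in {-1, +1}."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N       # outer-product (Hebbian) weights
    np.fill_diagonal(W, 0.0)            # no self-connections
    return W

def recall(W, probe, steps=20):
    """Iteratively update a noisy probe toward a stored pattern."""
    x = probe.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1                   # break ties deterministically
    return x
```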
arXiv Detail & Related papers (2023-12-30T21:49:47Z)
- Sequential Memory with Temporal Predictive Coding [6.228559238589584]
We propose a PC-based model for sequential memory, called temporal predictive coding (tPC).
We show that our tPC models can memorize and retrieve sequential inputs accurately with a biologically plausible neural implementation.
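The general idea, learning to predict the next item of a sequence and retrieving the sequence by rolling predictions forward from a cue, can be sketched as follows; this is a simplified stand-in, not the hierarchical, biologically plausible tPC implementation.
```python
# Minimal sketch of sequence memory via next-step prediction (not the full tPC model).
# The learning rule and retrieval loop are simplified assumptions.
import numpy as np

def train_sequence_memory(seq, lr=0.05, epochs=200):
    """seq: array of shape (T, N). Learn W so that W @ x_t approximates x_{t+1}."""
    T, N = seq.shape
    W = np.zeros((N, N))
    for _ in range(epochs):
        for t in range(T - 1):
            pred_error = seq[t + 1] - W @ seq[t]       # prediction error drives learning
            W += lr * np.outer(pred_error, seq[t])
    return W

def replay(W, cue, steps):
    """Retrieve a sequence by iterating the learned prediction from a cue."""
    x, out = cue.copy(), [cue.copy()]
    for _ in range(steps):
        x = W @ x
        out.append(x.copy())
    return np.array(out)
```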
arXiv Detail & Related papers (2023-05-19T20:03:31Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- OccamNet: A Fast Neural Model for Symbolic Regression at Scale [11.463756755780583]
OccamNet is a neural network model that finds interpretable, compact, and sparse symbolic fits to data.
Our model defines a probability distribution over functions with efficient sampling and function evaluation.
It can identify symbolic fits for a variety of problems, including analytic and non-analytic functions, implicit functions, and simple image classification.
arXiv Detail & Related papers (2020-07-16T21:14:45Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture that explicitly targets multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
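A minimal sketch of a hidden state separated into modules that update at different timescales is shown below; the module periods and update rule are assumptions, and the incremental addition of new modules is omitted.
```python
# Minimal sketch of an RNN hidden state split into modules with different timescales.
# Module count, update periods, and the simple tanh cell are illustrative assumptions.
import numpy as np

class MultiScaleRNN:
    def __init__(self, input_size, module_size, periods=(1, 2, 4)):
        self.periods = periods                      # module k updates every periods[k] steps
        rng = np.random.default_rng(0)
        self.Wx = [rng.normal(0, 0.1, (module_size, input_size)) for _ in periods]
        self.Wh = [rng.normal(0, 0.1, (module_size, module_size)) for _ in periods]
        self.h = [np.zeros(module_size) for _ in periods]
        self.t = 0

    def step(self, x):
        self.t += 1
        for k, period in enumerate(self.periods):
            if self.t % period == 0:                # slower modules update less often,
                pre = self.Wx[k] @ x + self.Wh[k] @ self.h[k]
                self.h[k] = np.tanh(pre)            # letting them track longer dependencies
        return np.concatenate(self.h)               # full hidden state = all modules
```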
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of predicting Boston house prices and training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
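For context, the one-step learning described above corresponds mathematically to solving a linear regression in closed form, which the crosspoint array carries out physically in analog; a purely digital sketch of that computation is given below (the array physics and the 2-layer MNIST network are not modeled).
```python
# Digital sketch of one-step (closed-form) linear regression, the computation that a
# crosspoint resistive array performs in analog. Illustrative only.
import numpy as np

def one_step_regression(X, y, ridge=1e-6):
    """Solve w = argmin ||Xw - y||^2 in a single linear-algebra step."""
    XtX = X.T @ X + ridge * np.eye(X.shape[1])   # small ridge term for numerical stability
    return np.linalg.solve(XtX, X.T @ y)

# Example: recover a noisy linear target in one step (13 features, as in the Boston data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))
w_true = rng.normal(size=13)
y = X @ w_true + 0.01 * rng.normal(size=200)
w = one_step_regression(X, y)
print(float(np.linalg.norm(w - w_true)))         # should be small
```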
This list is automatically generated from the titles and abstracts of the papers on this site.