Autonomous Learning with High-Dimensional Computing Architecture Similar to von Neumann's
- URL: http://arxiv.org/abs/2503.23608v1
- Date: Sun, 30 Mar 2025 22:20:08 GMT
- Title: Autonomous Learning with High-Dimensional Computing Architecture Similar to von Neumann's
- Authors: Pentti Kanerva
- Abstract summary: We model human and animal learning by computing with high-dimensional vectors (H = 10,000 for example). The architecture resembles traditional (von Neumann) computing with numbers, but the instructions refer to vectors and operate on them in superposition. The model's ability to learn from data reminds us of deep learning, but with an architecture closer to biology.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We model human and animal learning by computing with high-dimensional vectors (H = 10,000 for example). The architecture resembles traditional (von Neumann) computing with numbers, but the instructions refer to vectors and operate on them in superposition. The architecture includes a high-capacity memory for vectors, analogue of the random-access memory (RAM) for numbers. The model's ability to learn from data reminds us of deep learning, but with an architecture closer to biology. The architecture agrees with an idea from psychology that human memory and learning involve a short-term working memory and a long-term data store. Neuroscience provides us with a model of the long-term memory, namely, the cortex of the cerebellum. With roots in psychology, biology, and traditional computing, a theory of computing with vectors can help us understand how brains compute. Application to learning by robots seems inevitable, but there is likely to be more, including language. Ultimately we want to compute with no more material and energy than used by brains. To that end, we need a mathematical theory that agrees with psychology and biology, and is suitable for nanotechnology. We also need to exercise the theory in large-scale experiments. Computing with vectors is described here in terms familiar to us from traditional computing with numbers.
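The operations the abstract refers to (vectors held in superposition, and a high-capacity vector memory playing the role RAM plays for numbers) can be illustrated in a few lines of code. The sketch below is a toy illustration under assumptions of my own, not the paper's architecture or instruction set; the names `bind`, `bundle`, and `cleanup` and the toy record are invented for the example.

```python
import numpy as np

# Toy hyperdimensional (vector-symbolic) operations of the kind the abstract
# describes; illustrative only, not the paper's specific architecture.

H = 10_000                      # dimensionality, as in the abstract's example
rng = np.random.default_rng(0)

def random_vector():
    """A random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=H)

def bind(a, b):
    """Binding (elementwise multiply): associates two vectors; invertible."""
    return a * b

def bundle(*vs):
    """Superposition (elementwise majority vote): holds several vectors at once."""
    return np.sign(np.sum(vs, axis=0))

# Item memory: a high-capacity store for vectors, analogous to RAM for numbers.
item_memory = {name: random_vector() for name in ["color", "red", "shape", "round"]}

def cleanup(query):
    """Recall the name of the stored vector most similar to a noisy query."""
    return max(item_memory, key=lambda k: item_memory[k] @ query)

# Encode a record {color: red, shape: round} as a single vector in superposition.
record = bundle(bind(item_memory["color"], item_memory["red"]),
                bind(item_memory["shape"], item_memory["round"]))

# Query: unbind with "color", then clean up the noisy result.
noisy = bind(record, item_memory["color"])
print(cleanup(noisy))   # -> 'red' with high probability when H is large
```

Because binding by elementwise multiplication is its own inverse for bipolar vectors, a record held in superposition can be probed for one field and the noisy result restored by the cleanup memory.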
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning for spiking neural networks with nearly zero forgetting (a minimal sketch of the Hebbian/anti-Hebbian idea appears after this entry).
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
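A minimal sketch of the general idea referenced above, assuming an Oja/Földiák-style network with Hebbian feedforward updates and anti-Hebbian lateral decorrelation; this illustrates principal-subspace extraction, not the paper's actual method or its spiking formulation.

```python
import numpy as np

# Toy sketch: Hebbian feedforward learning plus anti-Hebbian lateral
# connections, which together tend toward the principal subspace of the input.

rng = np.random.default_rng(0)
n_in, n_out, lr = 20, 3, 0.01

W = rng.normal(scale=0.1, size=(n_out, n_in))   # feedforward weights (Hebbian)
L = np.zeros((n_out, n_out))                    # lateral weights (anti-Hebbian)

# Toy data whose variance is concentrated in a 3-dimensional subspace.
basis = rng.normal(size=(3, n_in))
X = rng.normal(size=(5000, 3)) @ basis + 0.05 * rng.normal(size=(5000, n_in))

for x in X:
    y_ff = W @ x
    y = y_ff + L @ y_ff                                    # one-step lateral decorrelation
    W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)     # Oja-style Hebbian update
    L -= lr * np.outer(y, y)                               # anti-Hebbian lateral update
    np.fill_diagonal(L, 0.0)                               # no self-connections

# The rows of W now roughly span the principal subspace of X.
```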
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on the von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive with Supervised Learning in Standard Spiking Neural Networks [0.0]
A view in theoretical neuroscience sees the brain as a function-computing device.
Being able to approximate functions is a fundamental axiom to build upon for future brain research.
In this work we apply a novel supervised learning algorithm - based on controlling niobium-doped strontium titanate memristive synapses - to learning non-trivial multidimensional functions.
arXiv Detail & Related papers (2022-04-12T10:21:22Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain; a generic single-neuron sketch follows this entry.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
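For readers unfamiliar with spiking neurons, here is a generic leaky integrate-and-fire (LIF) model, the textbook unit behind SNNs; this is an illustration only, not Loihi's actual neuron model or API.

```python
# Generic leaky integrate-and-fire neuron: integrates input current and emits
# a spike whenever the membrane potential crosses threshold, then resets.

dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0   # time step (ms), time constant (ms), thresholds

def simulate_lif(input_current, steps=200):
    """Return the time steps at which the neuron spikes for a constant input."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt / tau * (-v + input_current)   # leaky integration toward the input
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                        # reset after the spike
    return spikes

print(simulate_lif(1.5))   # constant suprathreshold drive -> a regular spike train
```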
- Memory and attention in deep learning [19.70919701635945]
Memory construction for machines is inevitable.
Recent progress on modeling memory in deep learning has revolved around external memory constructions.
The aim of this thesis is to advance the understanding on memory and attention in deep learning.
arXiv Detail & Related papers (2021-07-03T09:21:13Z)
- Neuromorphic Computing is Turing-Complete [0.0]
Neuromorphic computing is a non-von Neumann computing paradigm that performs computation by emulating the human brain.
Neuromorphic systems are extremely energy-efficient and known to consume thousands of times less power than CPUs and GPUs.
We devise neuromorphic circuits for computing all the mu-recursive functions and all the mu-recursive operators.
arXiv Detail & Related papers (2021-04-28T19:25:01Z)
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- On the computational power and complexity of Spiking Neural Networks [0.0]
We introduce spiking neural networks as a machine model where, in contrast to the familiar Turing machine, information and the manipulation thereof are co-located in the machine.
We introduce canonical problems, define hierarchies of complexity classes and provide some first completeness results.
arXiv Detail & Related papers (2020-01-23T10:40:16Z)