Planning with Biological Neurons and Synapses
- URL: http://arxiv.org/abs/2112.08186v2
- Date: Thu, 16 Dec 2021 15:33:17 GMT
- Title: Planning with Biological Neurons and Synapses
- Authors: Francesco d'Amore, Daniel Mitropolsky, Pierluigi Crescenzi, Emanuele
Natale, Christos H. Papadimitriou
- Abstract summary: We revisit the planning problem in the blocks world, and we implement a known heuristic for this task.
We believe that this is the first algorithm of its kind.
The input is a sequence of symbols encoding an initial set of block stacks as well as a target set, and the output is a sequence of motion commands such as "put the top block in stack 1 on the table".
- Score: 4.2873412319680035
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We revisit the planning problem in the blocks world, and we implement a known
heuristic for this task. Importantly, our implementation is biologically
plausible, in the sense that it is carried out exclusively through the spiking
of neurons. Even though much has been accomplished in the blocks world over the
past five decades, we believe that this is the first algorithm of its kind. The
input is a sequence of symbols encoding an initial set of block stacks as well
as a target set, and the output is a sequence of motion commands such as "put
the top block in stack 1 on the table". The program is written in the Assembly
Calculus, a recently proposed computational framework meant to model
computation in the brain by bridging the gap between neural activity and
cognitive function. Its elementary objects are assemblies of neurons (stable
sets of neurons whose simultaneous firing signifies that the subject is
thinking of an object, concept, word, etc.), its commands include project and
merge, and its execution model is based on widely accepted tenets of
neuroscience. A program in this framework essentially sets up a dynamical
system of neurons and synapses that eventually, with high probability,
accomplishes the task. The purpose of this work is to establish empirically
that reasonably large programs in the Assembly Calculus can execute correctly
and reliably; and that rather realistic -- if idealized -- higher cognitive
functions, such as planning in the blocks world, can be implemented
successfully by such programs.
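The abstract does not spell out which heuristic is implemented, but a standard blocks-world strategy of this kind is "dismantle then rebuild": move every misplaced block to the table, then stack blocks bottom-up into the target configuration. The sketch below (plain Python, NOT the paper's Assembly Calculus implementation) illustrates only the symbolic input/output behavior the abstract describes; the command strings and stack representation are illustrative assumptions.

```python
def plan(initial, target):
    """Return motion commands transforming `initial` into `target`.

    Both arguments are lists of stacks; each stack lists its blocks
    bottom-to-top. Every block name is assumed to be unique.
    """
    commands = []
    stacks = [list(s) for s in initial]

    # Blocks already sitting in a correct prefix of their target stack
    # (matching from the bottom up) never need to move.
    correct = set()
    for s, t in zip(stacks, target):
        for a, b in zip(s, t):
            if a != b:
                break
            correct.add(a)

    # Phase 1: dismantle -- move every misplaced block onto the table.
    for i, s in enumerate(stacks):
        while s and s[-1] not in correct:
            block = s.pop()
            commands.append(f"put the top block {block} of stack {i + 1} on the table")

    # Phase 2: rebuild -- stack the tabled blocks bottom-up per the target.
    for i, t in enumerate(target):
        height = len(stacks[i]) if i < len(stacks) else 0
        for block in t[height:]:
            commands.append(f"put block {block} on stack {i + 1}")
    return commands
```

The paper's contribution is carrying out such a computation purely through the spiking of simulated neurons; this sketch shows only the symbolic planning problem being solved, not the neural execution model.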
Related papers
- Cogito, ergo sum: A Neurobiologically-Inspired Cognition-Memory-Growth System for Code Generation [9.920563105290894]
Cogito is a neurobiologically inspired multi-agent framework that enhances problem-solving capabilities in code generation tasks at lower cost.
Cogito accumulates knowledge and cognitive skills at each stage, ultimately forming a Super Role, an all-capable agent that performs the code generation task.
arXiv Detail & Related papers (2025-01-30T01:41:44Z) - Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine.
We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures.
We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z) - No One-Size-Fits-All Neurons: Task-based Neurons for Artificial Neural Networks [25.30801109401654]
Since the human brain employs task-based neurons, can artificial network design move from task-based architecture design to task-based neuron design?
We propose a two-step framework for prototyping task-based neurons.
Experiments show that the proposed task-based neuron design is not only feasible but also delivers competitive performance over other state-of-the-art models.
arXiv Detail & Related papers (2024-05-03T09:12:46Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Redundancy and Concept Analysis for Code-trained Language Models [5.726842555987591]
Code-trained language models have proven to be highly effective for various code intelligence tasks.
They can be challenging to train and deploy for many software engineering applications due to computational bottlenecks and memory constraints.
We perform the first neuron-level analysis for source code models to identify important neurons within latent representations.
arXiv Detail & Related papers (2023-05-01T15:22:41Z) - A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of
computing and functional programming [0.0]
We will analyze the ability of neural networks to learn how to execute programs as a whole.
We will introduce the use of integrated neural learning and calculi formalization.
arXiv Detail & Related papers (2023-04-18T20:30:16Z) - Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z) - Neurocompositional computing: From the Central Paradox of Cognition to a
new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z) - A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive
with Supervised Learning in Standard Spiking Neural Networks [0.0]
A view in theoretical neuroscience sees the brain as a function-computing device.
Being able to approximate functions is a fundamental axiom to build upon for future brain research.
In this work we apply a novel supervised learning algorithm - based on controlling niobium-doped strontium titanate memristive synapses - to learning non-trivial multidimensional functions.
arXiv Detail & Related papers (2022-04-12T10:21:22Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Neurocoder: Learning General-Purpose Computation Using Stored Neural
Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its contents (including all information) and is not responsible for any consequences.