From Biological Synapses to Intelligent Robots
- URL: http://arxiv.org/abs/2202.12660v1
- Date: Fri, 25 Feb 2022 12:39:22 GMT
- Title: From Biological Synapses to Intelligent Robots
- Authors: Birgitta Dresp-Langley
- Abstract summary: Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence.
The potential for adaptive learning and control without supervision is brought forward.
The insights collected here point toward the Hebbian model as a choice solution for intelligent robotics and sensor systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This review explores biologically inspired learning as a model for
intelligent robot control and sensing technology on the basis of specific
examples. Hebbian synaptic learning is discussed as a functionally relevant
model for machine learning and intelligence, as explained on the basis of
examples from the highly plastic biological neural networks of invertebrates
and vertebrates. Its potential for adaptive learning and control without
supervision, for the generation of functional complexity, and for control
architectures based on self-organization is brought forward. Learning without
prior knowledge, based on excitatory and inhibitory neural mechanisms, accounts
for the process through which survival- or task-relevant representations are
either reinforced or suppressed. The basic mechanisms of unsupervised
biological learning drive
synaptic plasticity and adaptation for behavioral success in living brains with
different levels of complexity. The insights collected here point toward the
Hebbian model as a choice solution for intelligent robotics and sensor systems.
Keywords: Hebbian learning, synaptic plasticity, neural networks,
self-organization, brain, reinforcement, sensory processing, robot control
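For concreteness, below is a minimal sketch of the kind of unsupervised Hebbian plasticity the abstract describes: a single linear unit trained with Oja's rule, a Hebbian update with a normalizing decay term that keeps the weights bounded. This is an illustrative Python/NumPy example under that assumption, not a model taken from the reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_update(w, x, lr=0.01):
    """One Hebbian step: co-active input and output strengthen the synapse;
    Oja's decay term keeps the weight vector from growing without bound."""
    y = w @ x                          # postsynaptic activity
    return w + lr * y * (x - y * w)    # Hebbian term minus normalizing decay

# Toy input stream whose first component carries most of the variance.
data = rng.normal(size=(2000, 4)) * np.array([3.0, 1.0, 0.5, 0.2])
w = rng.normal(scale=0.1, size=4)
for x in data:
    w = oja_update(w, x)

print("learned weights:", np.round(w, 2))  # aligns with the dominant input direction
```

Run on correlated inputs, the weight vector aligns with the dominant input correlation, which is the sense in which relevant structure is reinforced without supervision or prior knowledge.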
Related papers
- Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free-energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Brain-inspired learning in artificial neural networks: a review [5.064447369892274]
We review current brain-inspired learning representations in artificial neural networks.
We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks' capabilities.
arXiv Detail & Related papers (2023-05-18T18:34:29Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Control of synaptic plasticity via the fusion of reinforcement learning and unsupervised learning in neural networks [0.0]
In cognitive neuroscience, it is widely accepted that synaptic plasticity plays an essential role in our amazing learning capability.
With this inspiration, a new learning rule is proposed via the fusion of reinforcement learning and unsupervised learning (a generic sketch of such a reward-modulated Hebbian update appears after this list).
In the proposed computational model, nonlinear optimal control theory is used to model the error-feedback loop.
arXiv Detail & Related papers (2023-03-26T12:18:03Z) - World Models and Predictive Coding for Cognitive and Developmental
Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- Learning body models: from humans to humanoids [2.855485723554975]
Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth, failures, or using tools.
A key foundation is an internal representation of the body that the agent (human, animal, or robot) has developed.
The mechanisms by which body models operate in the brain are largely unknown, and even less is known about how they are constructed from experience after birth.
arXiv Detail & Related papers (2022-11-06T07:30:01Z)
- Learning to acquire novel cognitive tasks with evolution, plasticity and meta-meta-learning [3.8073142980733]
In meta-learning, networks are trained with external algorithms to learn tasks that require acquiring, storing and exploiting unpredictable information for each new instance of the task.
Here we evolve neural networks, endowed with plastic connections, over a sizable set of simple meta-learning tasks based on a neuroscience modelling framework.
The resulting evolved network can automatically acquire a novel simple cognitive task, never seen during training, through the spontaneous operation of its evolved neural organization and plasticity structure.
arXiv Detail & Related papers (2021-12-16T03:18:01Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
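As referenced in the "Control of synaptic plasticity" entry above, a common way to fuse reinforcement learning with unsupervised Hebbian plasticity is a three-factor (reward-modulated) Hebbian update. The Python/NumPy sketch below is a generic illustration of that idea only; the bandit-style task, variable names, and learning rate are hypothetical assumptions and do not reproduce that paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_actions = 4, 2
W = rng.normal(scale=0.1, size=(n_actions, n_inputs))   # synaptic weights
reward_prob = np.array([0.2, 0.8])                      # hypothetical payoff per action
lr, baseline = 0.05, 0.0

for step in range(2000):
    x = rng.random(n_inputs)                 # presynaptic activity (random context)
    logits = W @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax action selection
    a = rng.choice(n_actions, p=probs)
    r = float(rng.random() < reward_prob[a]) # stochastic binary reward
    baseline += 0.01 * (r - baseline)        # running estimate of expected reward
    post = np.zeros(n_actions)
    post[a] = 1.0                            # postsynaptic activity of the chosen unit
    # Three-factor rule: presynaptic x postsynaptic x (reward - baseline)
    W += lr * (r - baseline) * np.outer(post, x)

print("preference for the better action:", round(float(W[1].sum() - W[0].sum()), 2))
```

Here the Hebbian product of pre- and postsynaptic activity is gated by a reward-prediction signal (reward minus a running baseline), so synapses contributing to rewarded actions are reinforced and the others are suppressed.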
This list is automatically generated from the titles and abstracts of the papers on this site.