Incremental procedural and sensorimotor learning in cognitive humanoid
robots
- URL: http://arxiv.org/abs/2305.00597v1
- Date: Sun, 30 Apr 2023 22:51:31 GMT
- Authors: Leonardo de Lellis Rossi, Leticia Mara Berto, Eric Rohmer, Paula Paro
Costa, Ricardo Ribeiro Gudwin, Esther Luna Colombini and Alexandre da Silva
Simoes
- Abstract summary: This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to automatically learn movements and behaviors of increasing
complexity is a long-term goal in autonomous systems. Indeed, this is a very
complex problem that involves understanding how knowledge is acquired and
reused by humans as well as proposing mechanisms that allow artificial agents
to reuse previous knowledge. Inspired by the first three sensorimotor
substages of Jean Piaget's theory, this work presents a cognitive agent based
on CONAIM
(Conscious Attention-Based Integrated Model) that can learn procedures
incrementally. Throughout the paper, we show the cognitive functions required
in each substage and how adding new functions helps address tasks previously
unsolved by the agent. Experiments were conducted with a humanoid robot in a
simulated environment modeled with the Cognitive Systems Toolkit (CST)
performing an object tracking task. The system is modeled using a single
procedural learning mechanism based on Reinforcement Learning. The agent's
increasing cognitive complexity is managed by adding new terms to the reward
function for each learning phase. Results show that this approach is capable of
solving complex tasks incrementally.
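The phase-by-phase reward composition described above can be sketched in Python. This is a minimal illustration under assumed, hypothetical reward terms (reflex, habit, and tracking terms loosely echoing Piaget's first three substages); the actual terms used with CONAIM and CST are defined in the paper and are not reproduced here.

```python
# Hedged sketch: incremental reward shaping across learning phases.
# The three terms below are illustrative placeholders, not the paper's terms.

def make_reward(phase):
    """Compose a reward function from all terms enabled up to `phase`."""

    def reflex_term(state, action):
        # Phase 1 (reflexes): small reward for any motor activation.
        return 0.1 if action is not None else 0.0

    def habit_term(state, action):
        # Phase 2 (primary circular reactions): reward actions that
        # produced a sensory change.
        return 0.5 if state.get("sensor_changed") else 0.0

    def tracking_term(state, action):
        # Phase 3 (secondary circular reactions): reward keeping the
        # tracked object near the center of the visual field.
        return 1.0 - abs(state.get("object_offset", 1.0))

    # New terms are *added* at each phase; earlier terms are retained,
    # so previously learned behavior keeps being reinforced.
    terms_by_phase = [reflex_term, habit_term, tracking_term]
    active = terms_by_phase[:phase]

    def reward(state, action):
        return sum(term(state, action) for term in active)

    return reward


reward_fn = make_reward(phase=3)
state = {"sensor_changed": True, "object_offset": 0.2}
print(round(reward_fn(state, "move_head"), 3))  # 0.1 + 0.5 + 0.8
```

Keeping earlier terms active while adding new ones is what lets a single RL mechanism serve all substages: the optimization target grows, but nothing already learned is discarded.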
Related papers
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from
Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We report experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- NeuroCERIL: Robotic Imitation Learning via Hierarchical Cause-Effect
Reasoning in Programmable Attractor Neural Networks [2.0646127669654826]
We present NeuroCERIL, a brain-inspired neurocognitive architecture that uses a novel hypothetico-deductive reasoning procedure.
We show that NeuroCERIL can learn various procedural skills in a simulated robotic imitation learning domain.
We conclude that NeuroCERIL is a viable neural model of human-like imitation learning.
arXiv Detail & Related papers (2022-11-11T19:56:11Z)
- Intelligent problem-solving as integrated hierarchical reinforcement
learning [11.284287026711125]
Development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms.
We propose steps to integrate biologically inspired hierarchical mechanisms to enable advanced problem-solving skills in artificial agents.
We expect our results to guide the development of more sophisticated cognitively inspired hierarchical machine learning architectures.
arXiv Detail & Related papers (2022-08-18T09:28:03Z)
- Physics-Guided Hierarchical Reward Mechanism for Learning-Based Robotic
Grasping [10.424363966870775]
We develop a Physics-Guided Deep Reinforcement Learning with a Hierarchical Reward Mechanism to improve learning efficiency and generalizability for learning-based autonomous grasping.
Our method is validated in robotic grasping tasks with a 3-finger MICO robot arm.
arXiv Detail & Related papers (2022-05-26T18:01:56Z)
- From Biological Synapses to Intelligent Robots [0.0]
Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence.
The potential for adaptive learning and control without supervision is brought forward.
The insights collected here point toward the Hebbian model as a choice solution for intelligent robotics and sensor systems.
arXiv Detail & Related papers (2022-02-25T12:39:22Z)
- BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning [108.41464483878683]
We study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks.
We develop an interactive and flexible imitation learning system that can learn from both demonstrations and interventions.
When scaling data collection on a real robot to more than 100 distinct tasks, we find that this system can perform 24 unseen manipulation tasks with an average success rate of 44%.
arXiv Detail & Related papers (2022-02-04T07:30:48Z)
- Cognitive architecture aided by working-memory for self-supervised
multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Hierarchical principles of embodied reinforcement learning: A review [11.613306236691427]
We show that all important cognitive mechanisms have been implemented independently in isolated computational architectures.
We expect our results to guide the development of more sophisticated cognitively inspired hierarchical methods.
arXiv Detail & Related papers (2020-12-18T10:19:38Z)
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.