Learning to Autonomously Reach Objects with NICO and Grow-When-Required Networks
- URL: http://arxiv.org/abs/2210.07851v2
- Date: Mon, 17 Oct 2022 07:19:35 GMT
- Title: Learning to Autonomously Reach Objects with NICO and Grow-When-Required Networks
- Authors: Nima Rahrakhshan, Matthias Kerzel, Philipp Allgeuer, Nicolas Duczek, Stefan Wermter
- Abstract summary: A developmental robotics approach is used to learn visuomotor coordination on the NICO platform for the task of object reaching.
Multiple Grow-When-Required (GWR) networks are used to learn increasingly complex motoric behaviors.
We show that the humanoid robot NICO is able to reach objects with a 76% success rate.
- Score: 12.106301681662655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The act of reaching for an object is a fundamental yet complex skill for a
robotic agent, requiring a high degree of visuomotor control and coordination.
In dynamic environments, a robot capable of autonomously adapting to
novel situations is desired. In this paper, a developmental
robotics approach is used to autonomously learn visuomotor coordination on the
NICO (Neuro-Inspired COmpanion) platform, for the task of object reaching. The
robot interacts with its environment and learns associations between motor
commands and temporally correlated sensory perceptions based on Hebbian
learning. Multiple Grow-When-Required (GWR) networks are used to learn
increasingly complex motoric behaviors, by first learning how to direct
the gaze towards a visual stimulus, followed by learning motor control of the
arm, and finally learning how to reach for an object using eye-hand
coordination. We demonstrate that the model is able to deal with an unforeseen
mechanical change in the NICO's body, showing the adaptability of the proposed
approach. In evaluations of our approach, we show that the humanoid robot NICO
is able to reach objects with a 76% success rate.
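
The mechanism described above, growing neural maps whose units are wired together by Hebbian co-activation, is concrete enough to sketch. The Python sketch below follows the standard Grow-When-Required formulation of Marsland et al. (2002), on which GWR-based models build, with a simplified habituation rule; the class, function, and parameter names (`GWRNetwork`, `hebbian_associate`, `activity_thresh`, ...) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class GWRNetwork:
    """Minimal Grow-When-Required network (after Marsland et al., 2002).

    A node is inserted only when the best-matching node responds weakly
    to an input (low activity) despite being well trained (low
    habituation), so the map grows exactly where its current
    representation is insufficient. Parameter values are illustrative.
    """

    def __init__(self, dim, activity_thresh=0.80, habituation_thresh=0.10,
                 eps_winner=0.10, eps_neighbor=0.01, habituation_decay=0.95,
                 max_edge_age=50, seed=0):
        rng = np.random.default_rng(seed)
        self.w = [rng.random(dim), rng.random(dim)]  # start with two nodes
        self.habit = [1.0, 1.0]        # 1.0 = untrained, decays toward 0
        self.edges = {}                # frozenset({i, j}) -> age
        self.a_T, self.h_T = activity_thresh, habituation_thresh
        self.eps_b, self.eps_n = eps_winner, eps_neighbor
        self.decay, self.max_age = habituation_decay, max_edge_age

    def _neighbors(self, i):
        return [next(iter(e - {i})) for e in self.edges if i in e]

    def best_match(self, x):
        return int(np.argmin([np.linalg.norm(x - w) for w in self.w]))

    def step(self, x):
        # 1. Find the best (b) and second-best (s) matching nodes.
        dists = [np.linalg.norm(x - w) for w in self.w]
        b, s = (int(i) for i in np.argsort(dists)[:2])
        self.edges[frozenset({b, s})] = 0          # (re)connect winners

        # 2. Grow when the winner is distant AND already habituated.
        if np.exp(-dists[b]) < self.a_T and self.habit[b] < self.h_T:
            self.w.append((self.w[b] + x) / 2.0)   # new node between x and w_b
            self.habit.append(1.0)
            r = len(self.w) - 1
            del self.edges[frozenset({b, s})]
            self.edges[frozenset({b, r})] = 0
            self.edges[frozenset({s, r})] = 0
        else:
            # 3. Otherwise adapt the winner and its neighbors toward x,
            #    scaled by habituation so well-trained nodes move less.
            self.w[b] = self.w[b] + self.eps_b * self.habit[b] * (x - self.w[b])
            for n in self._neighbors(b):
                self.w[n] = self.w[n] + self.eps_n * self.habit[n] * (x - self.w[n])

        # 4. Habituate the winner (simplified exponential decay), age its
        #    edges, and prune edges that have gone stale.
        self.habit[b] *= self.decay
        for e in list(self.edges):
            if b in e:
                self.edges[e] += 1
                if self.edges[e] > self.max_age:
                    del self.edges[e]


def hebbian_associate(assoc, net_a, net_b, x_a, x_b, lr=0.1):
    """Strengthen the Hebbian link between the units of two maps that
    fire together, e.g. a visual map and a motor map receiving
    temporally correlated samples during motor babbling.
    `assoc` is a dict mapping (node_a, node_b) -> weight."""
    i, j = net_a.best_match(x_a), net_b.best_match(x_b)
    assoc[(i, j)] = assoc.get((i, j), 0.0) + lr
    return assoc
```

In a staged setup such as the one the abstract describes, one map could cluster gaze directions and another arm configurations; co-activation during exploration would then strengthen links between them, and at recall the motor node most strongly associated with the currently active visual node would drive the reach. This is one plausible reading of the pipeline, not the paper's released code.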
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse range of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Deep Dive into Model-free Reinforcement Learning for Biological and Robotic Systems: Theory and Practice [17.598549532513122]
We present a concise exposition of the mathematical and algorithmic aspects of model-free reinforcement learning.
We use actor-critic methods as a tool for investigating the feedback control underlying animal and robotic behavior.
arXiv Detail & Related papers (2024-05-19T05:58:44Z)
- FollowMe: a Robust Person Following Framework Based on Re-Identification and Gestures [12.850149165791551]
Human-robot interaction (HRI) has become a crucial enabler in homes and industries for facilitating operational flexibility.
We developed a unified perception and navigation framework, which enables the robot to identify and follow a target person.
The Re-ID module can autonomously learn the features of a target person and use the acquired knowledge to visually re-identify the target.
arXiv Detail & Related papers (2023-11-21T20:59:27Z)
- Robot Skill Generalization via Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models [21.13906762261418]
A long-standing challenge for a robotic manipulation system is adapting and generalizing its acquired motor skills to unseen environments.
We tackle this challenge by employing hybrid skill models that integrate imitation and reinforcement paradigms.
We show that our method enables a robot to gain a significant zero-shot generalization to novel environments and to refine skills in the target environments faster than learning from scratch.
arXiv Detail & Related papers (2023-10-23T16:03:23Z)
- Hierarchical generative modelling for autonomous robots [8.023920215148486]
We show how a humanoid robot can autonomously complete a complex task that requires a holistic use of locomotion, manipulation, and grasping.
Specifically, we demonstrate the ability of a humanoid robot to retrieve and transport a box, open and walk through a door to reach the destination, and approach and kick a football, while showing robust performance in the presence of body damage and ground irregularities.
arXiv Detail & Related papers (2023-08-15T13:51:03Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on transferring to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- COCOI: Contact-aware Online Context Inference for Generalizable Non-planar Pushing [87.7257446869134]
General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online.
arXiv Detail & Related papers (2020-11-23T08:20:21Z)
- Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.