Learning Locomotion Skills in Evolvable Robots
- URL: http://arxiv.org/abs/2010.09531v1
- Date: Mon, 19 Oct 2020 14:01:50 GMT
- Title: Learning Locomotion Skills in Evolvable Robots
- Authors: Gongjin Lan, Maarten van Hooft, Matteo De Carlo, Jakub M. Tomczak,
A.E. Eiben
- Abstract summary: We introduce a controller architecture and a generic learning method to allow a modular robot with an arbitrary shape to learn to walk towards a target and follow this target if it moves.
Our approach is validated on three robots, a spider, a gecko, and their offspring, in three real-world scenarios.
- Score: 10.167123492952694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The challenge of robotic reproduction -- the making of new robots
by recombining two existing ones -- has recently been cracked, and physically
evolving robot systems have come within reach. Here we address the next big
hurdle: producing
an adequate brain for a newborn robot. In particular, we address the task of
targeted locomotion which is arguably a fundamental skill in any practical
implementation. We introduce a controller architecture and a generic learning
method to allow a modular robot with an arbitrary shape to learn to walk
towards a target and follow this target if it moves. Our approach is validated
on three robots, a spider, a gecko, and their offspring, in three real-world
scenarios.
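The abstract does not describe the controller or learning method in detail. As a hedged illustration only, the sketch below shows one common pattern for learning locomotion on modular robots: a sinusoidal (CPG-style) oscillator per joint whose amplitude and phase parameters are tuned by a simple hill-climbing loop against a fitness measure. The function names, the toy fitness, and the oscillator parameterization are all illustrative assumptions, not the paper's actual architecture.

```python
import math
import random

def joint_angle(amplitude, phase, t, freq=1.0):
    """Open-loop oscillator controlling one joint of a modular robot."""
    return amplitude * math.sin(2 * math.pi * freq * t + phase)

def evaluate(params, steps=100):
    """Toy stand-in for fitness; on a real robot this would be measured
    progress toward the target over one evaluation episode."""
    score = 0.0
    for t in range(steps):
        angles = [joint_angle(a, p, t / steps) for a, p in params]
        score += sum(angles) / len(angles)  # placeholder objective
    return score

def learn(n_joints=4, iterations=200, seed=0):
    """Hill-climb over per-joint (amplitude, phase) parameters."""
    rng = random.Random(seed)
    best = [(rng.uniform(0, 1), rng.uniform(0, 2 * math.pi))
            for _ in range(n_joints)]
    best_score = evaluate(best)
    for _ in range(iterations):
        # Perturb one joint's parameters; keep the change if fitness improves.
        cand = list(best)
        i = rng.randrange(n_joints)
        a, p = cand[i]
        cand[i] = (min(1.0, max(0.0, a + rng.gauss(0, 0.1))),
                   (p + rng.gauss(0, 0.3)) % (2 * math.pi))
        s = evaluate(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```

Because the oscillators are parameterized per joint, the same loop applies to a robot of arbitrary shape, which is the property the abstract emphasizes.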
Related papers
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Correspondence learning between morphologically different robots via task demonstrations [2.1374208474242815]
We propose a method to learn correspondences among two or more robots that may have different morphologies.
A fixed-based manipulator robot with joint control and a differential drive mobile robot can be addressed within the proposed framework.
We provide a proof-of-the-concept realization of correspondence learning between a real manipulator robot and a simulated mobile robot.
arXiv Detail & Related papers (2023-10-20T12:42:06Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across different robots.
We propose a novel method named $REvolveR$ of using continuous evolutionary models for robotic policy transfer implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Know Thyself: Transferable Visuomotor Control Through Robot-Awareness [22.405839096833937]
Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data.
We propose a "robot-aware" solution paradigm that exploits readily available robot "self-knowledge".
Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers.
arXiv Detail & Related papers (2021-07-19T17:56:04Z)
- Adaptation of Quadruped Robot Locomotion with Meta-Learning [64.71260357476602]
We demonstrate that meta-reinforcement learning can be used to successfully train a robot capable of solving a wide range of locomotion tasks.
The performance of the meta-trained robot is similar to that of a robot that is trained on a single task.
arXiv Detail & Related papers (2021-07-08T10:37:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.