Learning Directed Locomotion in Modular Robots with Evolvable
Morphologies
- URL: http://arxiv.org/abs/2001.07804v1
- Date: Tue, 21 Jan 2020 23:01:00 GMT
- Authors: Gongjin Lan, Matteo De Carlo, Fuda van Diggelen, Jakub M. Tomczak,
Diederik M. Roijers, and A.E. Eiben
- Abstract summary: This study is based on our interest in evolutionary robot systems where both morphologies and controllers evolve.
In such a system, newborn robots have to learn to control their own body, which is a random combination of their parents' bodies.
- Score: 11.006321791711175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We generalize the well-studied problem of gait learning in modular robots in
two dimensions. Firstly, we address locomotion in a given target direction that
goes beyond learning a typical undirected gait. Secondly, rather than studying
one fixed robot morphology we consider a test suite of different modular
robots. This study is based on our interest in evolutionary robot systems where
both morphologies and controllers evolve. In such a system, newborn robots have
to learn to control their own body, which is a random combination of their
parents' bodies. We apply and compare two learning algorithms, Bayesian
optimization and HyperNEAT. The results of the experiments in simulation show
that both methods successfully learn good controllers, but Bayesian
optimization is more effective and efficient. We validate the best learned
controllers by constructing three robots from the test suite in the real world
and observe their fitness and actual trajectories. The obtained results
indicate a reality gap that depends on the controllers and the shape of the
robots, but overall the trajectories are adequate and follow the target
directions successfully.
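The abstract describes a black-box learning loop: a controller is parameterized, each candidate is evaluated by running the robot and scoring its trajectory against the target direction, and an optimizer (Bayesian optimization or HyperNEAT in the paper) proposes the next candidate. Below is a minimal Python sketch of that loop. The fitness definition and the search routine are illustrative assumptions, not the paper's implementation, and plain random search stands in for Bayesian optimization to keep the sketch self-contained.

```python
import math
import random

def directed_fitness(trajectory, target_angle):
    """Score a trajectory by net displacement along a target direction.

    trajectory: list of (x, y) positions sampled during one evaluation.
    target_angle: desired heading in radians.
    Progress along the target direction is rewarded; lateral drift is
    subtracted, so walking fast but off-course scores poorly.
    """
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    ux, uy = math.cos(target_angle), math.sin(target_angle)
    along = dx * ux + dy * uy          # progress toward the target direction
    lateral = abs(-dx * uy + dy * ux)  # sideways deviation from the heading
    return along - lateral

def learn_controller(evaluate, dim, budget=50, seed=0):
    """Black-box search over controller parameters.

    evaluate: callable mapping a parameter vector to a fitness score
              (in the paper this would run the robot in simulation).
    Random search is used here as a stand-in for Bayesian optimization:
    sample candidates, evaluate each, keep the best.
    """
    rng = random.Random(seed)
    best_params, best_score = None, -math.inf
    for _ in range(budget):
        params = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Swapping the sampler for a Gaussian-process surrogate with an acquisition function would turn this into true Bayesian optimization; the evaluate/score/update structure stays the same.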
Related papers
- DittoGym: Learning to Control Soft Shape-Shifting Robots [30.287452037945542]
We explore reconfigurable robots, defined as robots that can change their morphology within their lifetime.
We formalize control of reconfigurable soft robots as a high-dimensional reinforcement learning (RL) problem.
We introduce DittoGym, a comprehensive RL benchmark for reconfigurable soft robots that require fine-grained morphology changes.
arXiv Detail & Related papers (2024-01-24T05:03:05Z)
- A comparison of controller architectures and learning mechanisms for arbitrary robot morphologies [2.884244918665901]
What combination of a robot controller and a learning method should be used, if the morphology of the learning robot is not known in advance?
We perform an experimental comparison of three controller-and-learner combinations.
We compare their efficacy, efficiency, and robustness.
arXiv Detail & Related papers (2023-09-25T07:11:43Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Universal Morphology Control via Contextual Modulation [52.742056836818136]
Learning a universal policy across different robot morphologies can significantly improve learning efficiency and generalization in continuous control.
Existing methods utilize graph neural networks or transformers to handle heterogeneous state and action spaces across different morphologies.
We propose a hierarchical architecture to better model this dependency via contextual modulation.
arXiv Detail & Related papers (2023-02-22T00:04:12Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- DayDreamer: World Models for Physical Robot Learning [142.11031132529524]
Deep reinforcement learning is a common approach to robot learning, but it requires a large amount of trial and error.
Many advances in robot learning rely on simulators.
In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without simulators.
arXiv Detail & Related papers (2022-06-28T17:44:48Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching action or state-transition distributions, including imitation learning methods, fail because the optimal action and/or state distributions differ across robots.
We propose a novel method named REvolveR that uses continuous evolutionary models for robot-to-robot policy transfer, implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots [29.02903745467536]
We propose Evolution Gym, the first large-scale benchmark for co-optimizing the design and control of soft robots.
Our benchmark environments span a wide range of tasks, including locomotion on various types of terrains and manipulation.
We develop several robot co-evolution algorithms by combining state-of-the-art design optimization methods and deep reinforcement learning techniques.
arXiv Detail & Related papers (2022-01-24T18:39:22Z)
- Learning Locomotion Skills in Evolvable Robots [10.167123492952694]
We introduce a controller architecture and a generic learning method to allow a modular robot with an arbitrary shape to learn to walk towards a target and follow this target if it moves.
Our approach is validated on three robots, a spider, a gecko, and their offspring, in three real-world scenarios.
arXiv Detail & Related papers (2020-10-19T14:01:50Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.