GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots
- URL: http://arxiv.org/abs/2209.05309v1
- Date: Mon, 12 Sep 2022 15:14:32 GMT
- Title: GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots
- Authors: Gilbert Feng, Hongbo Zhang, Zhongyu Li, Xue Bin Peng, Bhuvan
Basireddy, Linzhu Yue, Zhitao Song, Lizhi Yang, Yunhui Liu, Koushil Sreenath,
Sergey Levine
- Abstract summary: We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen a surge in commercially available and affordable
quadrupedal robots, with many of these platforms being actively used in
research and industry. As the availability of legged robots grows, so does the
need for controllers that enable these robots to perform useful skills.
However, most learning-based frameworks for controller development focus on
training robot-specific controllers, a process that needs to be repeated for
every new robot. In this work, we introduce a framework for training
generalized locomotion (GenLoco) controllers for quadrupedal robots. Our
framework synthesizes general-purpose locomotion controllers that can be
deployed on a large variety of quadrupedal robots with similar morphologies. We
present a simple but effective morphology randomization method that
procedurally generates a diverse set of simulated robots for training. We show
that by training a controller on this large set of simulated robots, our models
acquire more general control strategies that can be directly transferred to
novel simulated and real-world robots with diverse morphologies, which were not
observed during training.
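The morphology randomization described in the abstract could be sketched as follows: procedurally generate simulated quadrupeds by independently rescaling a nominal robot's kinematic and dynamic parameters. The parameter names, nominal values, and scaling range below are illustrative assumptions, not the paper's actual configuration.

```python
import random
from dataclasses import dataclass

@dataclass
class QuadrupedMorphology:
    # Hypothetical parameter set; a real implementation would randomize
    # many more kinematic and dynamic properties.
    thigh_length: float  # m
    calf_length: float   # m
    base_mass: float     # kg
    leg_mass: float      # kg

# Nominal robot the randomization is centered on (values are invented).
NOMINAL = QuadrupedMorphology(
    thigh_length=0.21, calf_length=0.21, base_mass=5.0, leg_mass=0.6
)

def sample_morphology(rng: random.Random,
                      scale_range=(0.7, 1.3)) -> QuadrupedMorphology:
    """Draw one randomized robot by scaling each nominal parameter independently."""
    s = lambda v: v * rng.uniform(*scale_range)
    return QuadrupedMorphology(
        thigh_length=s(NOMINAL.thigh_length),
        calf_length=s(NOMINAL.calf_length),
        base_mass=s(NOMINAL.base_mass),
        leg_mass=s(NOMINAL.leg_mass),
    )

def make_training_set(n: int, seed: int = 0) -> list[QuadrupedMorphology]:
    """Generate a diverse population of simulated robots for training."""
    rng = random.Random(seed)
    return [sample_morphology(rng) for _ in range(n)]
```

A controller trained across such a population sees a distribution over bodies rather than a single robot, which is what the paper credits for transfer to unseen morphologies.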
Related papers
- Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation
By training a single policy across many different kinds of robots, a robot learning method can leverage much broader and more diverse datasets.
We propose CrossFormer, a scalable and flexible transformer-based policy that can consume data from any embodiment.
We demonstrate that the same network weights can control vastly different robots, including single and dual arm manipulation systems, wheeled robots, quadcopters, and quadrupeds.
arXiv Detail & Related papers (2024-08-21T17:57:51Z)
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Learning Visual Quadrupedal Loco-Manipulation from Demonstrations
We aim to empower a quadruped robot to execute real-world manipulation tasks using only its legs.
We decompose the loco-manipulation process into a low-level reinforcement learning (RL)-based controller and a high-level Behavior Cloning (BC)-based planner.
Our approach is validated through simulations and real-world experiments, demonstrating the robot's ability to perform tasks that demand mobility and high precision.
arXiv Detail & Related papers (2024-03-29T17:59:05Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- MetaMorph: Learning Universal Controllers with Transformers
In robotics, we primarily train a single robot for a single task.
Modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies.
We propose MetaMorph, a Transformer based approach to learn a universal controller over a modular robot design space.
arXiv Detail & Related papers (2022-03-22T17:58:31Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching action or state transition distributions, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across robots.
We propose a novel method, REvolveR, that uses continuous evolutionary models for robot-to-robot policy transfer, implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- Know Thyself: Transferable Visuomotor Control Through Robot-Awareness
Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data.
We propose a "robot-aware" solution paradigm that exploits readily available robot "self-knowledge".
Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers.
arXiv Detail & Related papers (2021-07-19T17:56:04Z)
- Learning Locomotion Skills in Evolvable Robots
We introduce a controller architecture and a generic learning method to allow a modular robot with an arbitrary shape to learn to walk towards a target and follow this target if it moves.
Our approach is validated on three robots, a spider, a gecko, and their offspring, in three real-world scenarios.
arXiv Detail & Related papers (2020-10-19T14:01:50Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
- Learning Directed Locomotion in Modular Robots with Evolvable Morphologies
This study is based on our interest in evolutionary robot systems where both morphologies and controllers evolve.
In such a system, newborn robots have to learn to control their own body, which is a random combination of their parents' bodies.
arXiv Detail & Related papers (2020-01-21T23:01:00Z)