GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model
- URL: http://arxiv.org/abs/2104.09771v2
- Date: Thu, 22 Apr 2021 03:40:44 GMT
- Title: GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model
- Authors: Zhaoming Xie, Xingye Da, Buck Babich, Animesh Garg, Michiel van de Panne
- Abstract summary: We show how model-free reinforcement learning can be effectively used with a centroidal model to generate robust control policies for quadrupedal locomotion.
We show the potential of the method by demonstrating stepping-stone locomotion, two-legged in-place balance, balance beam locomotion, and sim-to-real transfer without further adaptations.
- Score: 18.66472547798549
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-free reinforcement learning (RL) for legged locomotion commonly relies
on a physics simulator that can accurately predict the behaviors of every
degree of freedom of the robot. In contrast, approximate reduced-order models
are often sufficient for many model-based control strategies. In this work we
explore how RL can be effectively used with a centroidal model to generate
robust control policies for quadrupedal locomotion. Advantages over RL with a
full-order model include a simple reward structure, reduced computational
costs, and robust sim-to-real transfer. We further show the potential of the
method by demonstrating stepping-stone locomotion, two-legged in-place balance,
balance beam locomotion, and sim-to-real transfer without further adaptations.
Additional Results: https://www.pair.toronto.edu/glide-quadruped/.
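As a rough illustration of the reduced-order setting described in the abstract, the sketch below steps a single-rigid-body (centroidal) model forward under contact forces chosen by a policy and scores it with a simple velocity-tracking reward. The mass, inertia, timestep, and reward are illustrative assumptions, not the formulation used in the paper.

```python
import numpy as np

# Illustrative centroidal (single-rigid-body) model: the state is the CoM
# position/velocity and the body angular velocity (orientation omitted for
# brevity); the action is a contact force at each stance foot.
# All parameters are assumptions, not the paper's values.
MASS = 12.0                            # kg, assumed robot mass
INERTIA = np.diag([0.1, 0.25, 0.3])    # kg m^2, assumed body inertia
GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2
DT = 0.02                              # s, control timestep

def centroidal_step(com_pos, com_vel, ang_vel, foot_pos, foot_forces):
    """Advance the centroidal model one Euler step.

    com_pos, com_vel, ang_vel: (3,) arrays.
    foot_pos:    (n_contacts, 3) stance-foot positions in the world frame.
    foot_forces: (n_contacts, 3) contact forces chosen by the policy.
    """
    total_force = foot_forces.sum(axis=0) + MASS * GRAVITY
    # Moments of the contact forces about the CoM drive the angular dynamics.
    torque = np.cross(foot_pos - com_pos, foot_forces).sum(axis=0)

    com_acc = total_force / MASS
    ang_acc = np.linalg.solve(INERTIA, torque - np.cross(ang_vel, INERTIA @ ang_vel))

    return com_pos + DT * com_vel, com_vel + DT * com_acc, ang_vel + DT * ang_acc

def tracking_reward(com_vel, target_vel=np.array([0.5, 0.0, 0.0])):
    # A simple reward of the kind a reduced-order model makes easy to write:
    # penalize deviation from a commanded CoM velocity.
    return float(np.exp(-np.sum((com_vel - target_vel) ** 2)))
```

In a pipeline of this kind, a low-level controller would typically convert the commanded contact forces and footstep locations into joint torques on the physical robot.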
Related papers
- Offline Adaptation of Quadruped Locomotion using Diffusion Models [59.882275766745295]
We present a diffusion-based approach to quadrupedal locomotion that simultaneously addresses the limitations of learning and interpolating between multiple skills.
We show that these capabilities are compatible with a multi-skill policy and can be applied with little modification and minimal compute overhead.
We verify the validity of our approach with hardware experiments on the ANYmal quadruped platform.
arXiv Detail & Related papers (2024-11-13T18:12:15Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Combining model-predictive control and predictive reinforcement learning for stable quadrupedal robot locomotion [0.0]
We study how this can be achieved by a combination of model-predictive and predictive reinforcement learning controllers.
In this work, we combine both control methods to address the problem of stable gait generation for a quadrupedal robot.
arXiv Detail & Related papers (2023-07-15T09:22:37Z)
- RL + Model-based Control: Using On-demand Optimal Control to Learn Versatile Legged Locomotion [16.800984476447624]
This paper presents a control framework that combines model-based optimal control and reinforcement learning.
We validate the robustness and controllability of the framework through a series of experiments.
Our framework effortlessly supports the training of control policies for robots with diverse dimensions.
arXiv Detail & Related papers (2023-05-29T01:33:55Z)
- Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
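As context for the spring-loaded inverted pendulum (SLIP) reference mentioned above, here is a minimal planar SLIP stance-phase step together with a reward that keeps a robot's center of mass near the SLIP trajectory. The parameters and the exact form of the guiding reward are illustrative assumptions, not the paper's.

```python
import numpy as np

# Toy planar SLIP stance phase: a point mass on a massless spring leg pinned
# at the foot. Parameters are illustrative, not taken from the paper.
MASS = 10.0         # kg
REST_LENGTH = 0.5   # m, spring rest length
STIFFNESS = 2000.0  # N/m
DT = 0.001          # s

def slip_stance_step(pos, vel, foot):
    """One Euler step of the 2-D (x, z) SLIP stance dynamics."""
    leg = pos - foot
    length = np.linalg.norm(leg)
    spring_force = STIFFNESS * (REST_LENGTH - length) * (leg / length)
    acc = spring_force / MASS + np.array([0.0, -9.81])
    return pos + DT * vel, vel + DT * acc

def slip_reference_reward(robot_com, slip_com):
    # A guiding reward of the kind the summary describes: keep the robot's
    # center of mass close to the SLIP reference trajectory.
    return float(np.exp(-10.0 * np.sum((robot_com - slip_com) ** 2)))
```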
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
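A domain-randomization scheme of the kind this summary describes might resample simulator parameters at the start of each episode, roughly as sketched below; the parameter names and ranges are placeholders rather than the paper's settings.

```python
import numpy as np

# Illustrative domain randomization: resample simulator parameters at the
# start of every episode so the policy cannot overfit to one dynamics model.
# Ranges are placeholders, not the paper's settings.
RANDOMIZATION_RANGES = {
    "link_mass_scale":    (0.8, 1.2),   # multiplicative scaling of link masses
    "joint_friction":     (0.0, 0.1),   # N m s/rad
    "ground_friction":    (0.5, 1.2),   # contact friction coefficient
    "motor_torque_scale": (0.9, 1.1),   # actuator strength scaling
    "action_latency_s":   (0.0, 0.02),  # control delay in seconds
}

def sample_dynamics(rng: np.random.Generator) -> dict:
    """Draw one randomized dynamics configuration for an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# Example: draw parameters for a new episode before resetting the simulator.
params = sample_dynamics(np.random.default_rng(0))
```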
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control [6.669503016190925]
We present a unified model-based and data-driven approach for quadrupedal planning and control.
We map sensory information and desired base velocity commands into footstep plans using a reinforcement learning policy.
We train and evaluate our framework on a complex quadrupedal system, ANYmal B, and demonstrate transferability to a larger and heavier robot, ANYmal C, without requiring retraining.
arXiv Detail & Related papers (2020-12-05T18:30:23Z)
- Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
- First Steps: Latent-Space Control with Semantic Constraints for Quadruped Locomotion [73.37945453998134]
Traditional approaches to quadruped control employ simplified, hand-derived models.
This significantly reduces the capability of the robot since its effective kinematic range is curtailed.
In this work, these challenges are addressed by framing quadruped control as optimisation in a structured latent space.
A deep generative model captures a statistical representation of feasible joint configurations, whilst complex dynamic and terminal constraints are expressed via high-level, semantic indicators.
We validate the feasibility of locomotion trajectories optimised using our approach both in simulation and on a real-world ANYmal quadruped.
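To make the latent-space framing concrete, the toy sketch below optimizes a latent vector through a stand-in linear decoder so the decoded joint configuration approaches a target while staying near the latent prior. The real method uses a learned deep generative model and semantic constraints; every component here is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned generative model: a fixed linear "decoder" from
# an 8-D latent space to 12 joint angles. In the paper this would be a deep
# generative model trained on feasible joint configurations.
LATENT_DIM, NUM_JOINTS = 8, 12
DECODER = 0.3 * rng.standard_normal((NUM_JOINTS, LATENT_DIM))

def decode(z: np.ndarray) -> np.ndarray:
    return DECODER @ z

def task_cost(z: np.ndarray, target_joints: np.ndarray) -> float:
    # Placeholder cost: reach a target configuration while keeping the latent
    # near the prior (a proxy for staying on the manifold of feasible poses).
    q = decode(z)
    return float(np.sum((q - target_joints) ** 2) + 0.1 * np.sum(z ** 2))

def optimize_latent(target_joints, steps=200, lr=0.05):
    """Gradient descent in latent space (analytic gradient of the toy cost)."""
    z = np.zeros(LATENT_DIM)
    for _ in range(steps):
        grad = 2.0 * DECODER.T @ (decode(z) - target_joints) + 0.2 * z
        z -= lr * grad
    return z, decode(z)

z_star, q_star = optimize_latent(target_joints=0.2 * rng.standard_normal(NUM_JOINTS))
```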
arXiv Detail & Related papers (2020-07-03T07:04:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.