Gaitor: Learning a Unified Representation Across Gaits for Real-World Quadruped Locomotion
- URL: http://arxiv.org/abs/2405.19452v2
- Date: Wed, 09 Oct 2024 14:27:51 GMT
- Title: Gaitor: Learning a Unified Representation Across Gaits for Real-World Quadruped Locomotion
- Authors: Alexander L. Mitchell, Wolfgang Merkt, Aristotelis Papatheodorou, Ioannis Havoutis, Ingmar Posner
- Abstract summary: We present Gaitor, which learns a disentangled, two-dimensional representation across locomotion gaits.
Gaitor's latent space is readily interpretable and we discover that during gait transitions, novel unseen gaits emerge.
We evaluate Gaitor in both simulation and the real world on the ANYmal C platform.
- Abstract: The current state-of-the-art in quadruped locomotion can produce a variety of complex motions. These methods either rely on switching between a discrete set of skills or learn a distribution across gaits using complex black-box models. Alternatively, we present Gaitor, which learns a disentangled, 2D representation across locomotion gaits. This learnt representation forms a planning space for closed-loop control, delivering continuous gait transitions and perceptive terrain traversal. Gaitor's latent space is readily interpretable, and we discover that novel unseen gaits emerge during gait transitions. The latent space is disentangled with respect to footswing heights and lengths, meaning that these gait characteristics can be varied independently in the 2D latent representation. Together with a simple terrain encoding and a learnt planner operating in the latent space, Gaitor can follow motion commands, including desired gait type and swing characteristics, while reacting to uneven terrain. We evaluate Gaitor in both simulation and the real world on the ANYmal C platform. To the best of our knowledge, this is the first work to learn a unified and interpretable latent space for multiple gaits, resulting in continuous blending between different locomotion modes on a real quadruped robot. An overview of the methods and results in this paper can be found at https://youtu.be/eVFQbRyilCA.
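Since the abstract describes the core mechanism (planning by moving through a 2D, disentangled latent space), a minimal sketch may help fix the idea. Everything below is assumed: the `decode` function, the gait codes `Z_TROT`/`Z_CRAWL`, and their placement are hypothetical stand-ins for the learnt model, not the paper's implementation.

```python
import numpy as np

# Hypothetical stand-in for Gaitor's learnt decoder: maps a 2D latent code
# plus a gait-cycle phase to joint-space targets. The paper's decoder is a
# learnt neural network; a fixed random linear map keeps this sketch runnable.
def decode(z, phase, n_joints=12):
    rng = np.random.default_rng(0)  # fixed weights for repeatability
    W = rng.standard_normal((n_joints, 3))
    return W @ np.array([z[0], z[1], np.sin(2.0 * np.pi * phase)])

# Assumed gait codes in the 2D latent space. Their placement here is
# illustrative; in the paper the gait regions are discovered, not hand-picked.
Z_TROT = np.array([1.0, 0.0])
Z_CRAWL = np.array([-1.0, 0.0])

def blend_gaits(alpha, phase):
    """Continuous gait transition: trace a straight path in latent space.

    alpha = 0 gives trot, alpha = 1 gives crawl; intermediate values pass
    through the region where the paper reports novel unseen gaits emerging.
    """
    z = (1.0 - alpha) * Z_TROT + alpha * Z_CRAWL
    return decode(z, phase)

# Sweep the transition over a fixed point in the gait cycle.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    q = blend_gaits(alpha, phase=0.3)
    print(f"alpha={alpha:.2f} -> first joint target {q[0]:+.3f}")
```

Because the space is disentangled, footswing height and length would correspond to moving along the individual latent axes, so either characteristic can be varied without disturbing the other.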
Related papers
- Learning Humanoid Locomotion over Challenging Terrain [84.35038297708485]
We present a learning-based approach for blind humanoid locomotion capable of traversing challenging natural and man-made terrains.
Our model is first pre-trained on a dataset of flat-ground trajectories with sequence modeling, and then fine-tuned on uneven terrain using reinforcement learning.
We evaluate our model on a real humanoid robot across a variety of terrains, including rough, deformable, and sloped surfaces.
arXiv Detail & Related papers (2024-10-04T17:57:09Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate the full range of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion [34.33972863987201]
We train quadruped robots to use the front legs to climb walls, press buttons, and perform object interaction in the real world.
These skills are trained in simulation using curriculum and transferred to the real world using our proposed sim2real variant.
We evaluate our method in both simulation and the real world, showing successful execution of both short- and long-range tasks.
arXiv Detail & Related papers (2023-03-20T17:59:58Z)
- Legged Locomotion in Challenging Terrains using Egocentric Vision [70.37554680771322]
We present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps.
We show this result on a medium-sized quadruped robot using a single front-facing depth camera.
arXiv Detail & Related papers (2022-11-14T18:59:58Z)
- VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that learning a latent space capturing the key stance phases constituting a particular gait is pivotal in increasing controller robustness.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height, and full stance duration (a toy sketch of this mapping follows this entry).
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z)
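The sketch below illustrates the drive-signal mapping VAE-Loco reports. The functional forms are assumptions chosen for illustration; the paper identifies these correspondences empirically in its learnt latent space.

```python
import numpy as np

def drive_signal(t, cadence_hz, amplitude):
    """Toy oscillatory drive signal: frequency plays the role of cadence,
    amplitude that of footstep height (illustrative forms, not the paper's)."""
    return amplitude * np.sin(2.0 * np.pi * cadence_hz * t)

def foot_in_stance(t, cadence_hz, stance_fraction):
    """Duty cycle of the oscillation stands in for full stance duration."""
    phase = (t * cadence_hz) % 1.0
    return phase < stance_fraction

t = np.linspace(0.0, 2.0, 9)  # coarse two-second window
quick_shallow = drive_signal(t, cadence_hz=2.5, amplitude=0.4)  # fast, low steps
slow_high = drive_signal(t, cadence_hz=1.0, amplitude=1.0)      # slow, high steps
print(np.round(quick_shallow, 2))
print(foot_in_stance(t, cadence_hz=1.0, stance_fraction=0.6))
```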
- Learning Free Gait Transition for Quadruped Robots via Phase-Guided Controller [4.110347671351065]
We present a novel framework for training a simple control policy for a quadruped robot to locomote in various gaits.
The Black Panther robot, a medium-dog-sized quadruped, can perform all learned motor skills while following velocity commands smoothly and robustly in natural environments.
arXiv Detail & Related papers (2022-01-01T15:15:42Z)
- Learning Quadruped Locomotion Policies using Logical Rules [2.008081703108095]
We aim to enable easy gait specification and efficient policy learning for quadruped robots.
Our approach, called RM-based Locomotion Learning (RMLL), supports adjusting gait frequency at execution time (a minimal reward-machine sketch follows this entry).
We demonstrate these learned policies with a real quadruped robot.
arXiv Detail & Related papers (2021-07-23T00:37:32Z)
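Reward machines encode a task as a finite-state machine whose transitions fire on logical propositions and emit reward. The trot machine below is a minimal sketch in that spirit: the states, contact propositions, and reward values are assumptions for illustration, not RMLL's actual rules.

```python
from typing import FrozenSet

# Propositions: which feet are in contact at the current timestep.
DIAG_A = frozenset({"front_left", "hind_right"})
DIAG_B = frozenset({"front_right", "hind_left"})

class TrotRewardMachine:
    """Two-state machine tracking progress through one trot cycle."""

    def __init__(self):
        self.state = 0  # 0: expect diagonal pair A, 1: expect diagonal pair B

    def step(self, contacts: FrozenSet[str]) -> float:
        """Advance the machine on an observed contact set; return reward."""
        if self.state == 0 and contacts == DIAG_A:
            self.state = 1
            return 1.0  # progressed half a trot cycle
        if self.state == 1 and contacts == DIAG_B:
            self.state = 0
            return 1.0  # completed the cycle
        return 0.0      # no progress, no reward

rm = TrotRewardMachine()
for contacts in (DIAG_A, DIAG_B, DIAG_B, DIAG_A):
    print(rm.step(contacts))  # 1.0, 1.0, 0.0, 1.0
```

Executing the machine faster or slower relative to the control loop is one plausible way such a formulation could expose gait frequency as a run-time knob.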
- Quadruped Locomotion on Non-Rigid Terrain using Reinforcement Learning [10.729374293332281]
We present a novel reinforcement learning framework for learning locomotion on non-rigid dynamic terrains.
A trained robot with a 55 cm base length can walk on terrain that can sink up to 5 cm.
We show the effectiveness of our method by training the robot under a variety of terrain conditions.
arXiv Detail & Related papers (2021-07-07T00:34:23Z)