PALo: Learning Posture-Aware Locomotion for Quadruped Robots
- URL: http://arxiv.org/abs/2503.04462v1
- Date: Thu, 06 Mar 2025 14:13:59 GMT
- Title: PALo: Learning Posture-Aware Locomotion for Quadruped Robots
- Authors: Xiangyu Miao, Jun Sun, Hang Lai, Xinpeng Di, Jiahang Cao, Yong Yu, Weinan Zhang
- Abstract summary: We propose an end-to-end deep reinforcement learning framework for posture-aware locomotion named PALo. PALo handles simultaneous linear and angular velocity tracking and real-time adjustments of body height, pitch, and roll angles. PALo achieves agile posture-aware locomotion control in simulated environments and successfully transfers to real-world settings without fine-tuning.
- Score: 29.582249837902427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of embodied intelligence, locomotion control of quadruped robots on complex terrains has become a research hotspot. Unlike traditional locomotion control approaches that focus solely on velocity tracking, we aim to balance the agility and robustness of quadruped robots on diverse and complex terrains. To this end, we propose an end-to-end deep reinforcement learning framework for posture-aware locomotion named PALo, which handles simultaneous linear and angular velocity tracking and real-time adjustments of body height, pitch, and roll angles. In PALo, the locomotion control problem is formulated as a partially observable Markov decision process, and an asymmetric actor-critic architecture is adopted to overcome the sim-to-real challenge. Further, by incorporating customized training curricula, PALo achieves agile posture-aware locomotion control in simulated environments and successfully transfers to real-world settings without fine-tuning, allowing real-time control of the quadruped robot's locomotion and body posture across challenging terrains. Through in-depth experimental analysis, we identify the key components of PALo that contribute to its performance, further validating the effectiveness of the proposed method. The results of this study open new possibilities for low-level locomotion control of quadruped robots in higher-dimensional command spaces and lay the foundation for future research on upper-level modules for embodied intelligence.
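To make the asymmetric actor-critic idea concrete, here is a minimal PyTorch-style sketch of one way such an architecture can be wired: the actor receives only observations available on the real robot (proprioception plus a posture command of linear/angular velocity, body height, pitch, and roll), while the critic additionally consumes privileged simulator state during training. All module names, dimensions, and the observation split are illustrative assumptions, not details taken from the PALo paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not values from the PALo paper).
PROPRIO_DIM = 48      # joint positions/velocities, IMU, previous action, ...
COMMAND_DIM = 6       # v_x, v_y, yaw rate, body height, pitch, roll
PRIVILEGED_DIM = 32   # e.g. terrain heights, contact states, friction (sim only)
ACTION_DIM = 12       # target joint positions for a 12-DoF quadruped


def mlp(in_dim: int, out_dim: int, hidden: int = 256) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ELU(),
        nn.Linear(hidden, hidden), nn.ELU(),
        nn.Linear(hidden, out_dim),
    )


class AsymmetricActorCritic(nn.Module):
    """Actor sees only deployable observations; critic also sees privileged state."""

    def __init__(self) -> None:
        super().__init__()
        self.actor = mlp(PROPRIO_DIM + COMMAND_DIM, ACTION_DIM)
        self.critic = mlp(PROPRIO_DIM + COMMAND_DIM + PRIVILEGED_DIM, 1)

    def act(self, proprio: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        # Only information available on the real robot is used here.
        return self.actor(torch.cat([proprio, command], dim=-1))

    def value(self, proprio: torch.Tensor, command: torch.Tensor,
              privileged: torch.Tensor) -> torch.Tensor:
        # Privileged state is used for value estimation during training only;
        # the critic is discarded at deployment.
        return self.critic(torch.cat([proprio, command, privileged], dim=-1))


# Usage with a small batch of simulated observations.
model = AsymmetricActorCritic()
proprio, command = torch.randn(8, PROPRIO_DIM), torch.randn(8, COMMAND_DIM)
privileged = torch.randn(8, PRIVILEGED_DIM)
actions = model.act(proprio, command)               # shape (8, 12)
values = model.value(proprio, command, privileged)  # shape (8, 1)
```

The key design choice is that the critic's extra inputs never reach the actor, so the deployed policy has no dependence on information that is unavailable outside simulation.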
Related papers
- Gait in Eight: Efficient On-Robot Learning for Omnidirectional Quadruped Locomotion [13.314871831095882]
On-robot Reinforcement Learning is a promising approach to train embodiment-aware policies for legged robots.
We present a framework for efficiently learning quadruped locomotion in just 8 minutes of raw real-time training.
We demonstrate the robustness of our approach in different indoor and outdoor environments.
arXiv Detail & Related papers (2025-03-11T12:32:06Z) - Humanoid Whole-Body Locomotion on Narrow Terrain via Dynamic Balance and Reinforcement Learning [54.26816599309778]
We propose a novel whole-body locomotion algorithm based on dynamic balance and Reinforcement Learning (RL).
Specifically, we introduce a dynamic balance mechanism by leveraging an extended measure of Zero-Moment Point (ZMP)-driven rewards and task-driven rewards in a whole-body actor-critic framework.
Experiments conducted on a full-sized Unitree H1-2 robot verify the ability of our method to maintain balance on extremely narrow terrains.
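For context, the Zero-Moment Point referenced above is commonly approximated with a point-mass (cart-table) model; the paper's extended ZMP measure may differ, so the expression below is only the textbook form it builds on (assuming a negligible rate of change of angular momentum about the center of mass).

```latex
% ZMP along the x-axis under a point-mass approximation:
% (x_c, z_c) are CoM coordinates, g is gravitational acceleration.
p_x = x_c - \frac{z_c\,\ddot{x}_c}{\ddot{z}_c + g}
```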
arXiv Detail & Related papers (2025-02-24T14:53:45Z) - Learning Humanoid Standing-up Control across Diverse Postures [27.79222176982376]
We present HoST (Humanoid Standing-up Control), a reinforcement learning framework that learns standing-up control from scratch.
HoST effectively learns posture-adaptive motions by leveraging a multi-critic architecture and curriculum-based training on diverse simulated terrains.
Our experimental results demonstrate that the controllers achieve smooth, stable, and robust standing-up motions across a wide range of laboratory and outdoor environments.
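As a rough illustration of what curriculum-based training over terrain difficulty can look like (the actual schedule and the multi-critic setup in HoST are not reproduced here), a minimal adaptive curriculum might raise or lower terrain difficulty based on the policy's recent success rate. The thresholds and step sizes below are assumptions chosen purely for illustration.

```python
class TerrainCurriculum:
    """Toy adaptive curriculum: scale terrain difficulty with policy success."""

    def __init__(self, max_difficulty: float = 1.0) -> None:
        self.difficulty = 0.0
        self.max_difficulty = max_difficulty

    def update(self, success_rate: float) -> float:
        # Promote when the policy succeeds often, demote when it struggles.
        if success_rate > 0.8:
            self.difficulty = min(self.difficulty + 0.05, self.max_difficulty)
        elif success_rate < 0.4:
            self.difficulty = max(self.difficulty - 0.05, 0.0)
        return self.difficulty


# Example: difficulty ratchets up as measured success improves over iterations.
curriculum = TerrainCurriculum()
for success_rate in (0.3, 0.6, 0.85, 0.9, 0.95):
    level = curriculum.update(success_rate)  # feed into terrain generation
```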
arXiv Detail & Related papers (2025-02-12T13:10:09Z) - Learning to enhance multi-legged robot on rugged landscapes [7.956679144631909]
Multi-legged robots offer a promising solution for navigating rugged landscapes.
Recent studies have shown that a linear controller can ensure reliable mobility on challenging terrains.
We develop a MuJoCo-based simulator tailored to this robotic platform and use the simulation to develop a reinforcement learning-based control framework.
arXiv Detail & Related papers (2024-09-14T15:53:08Z) - Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning [37.95557495560936]
We introduce a hierarchical locomotion controller that combines a classical planner tasked with planning waypoints to reach a faraway global goal location, and an RL-based policy trained to follow these waypoints by generating low-level motion commands.
In simulation, our hierarchical approach succeeds at navigating through demanding confined 3D environments, outperforming both pure end-to-end learning approaches and parameterized locomotion skills.
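One plausible shape for such a hierarchical controller is sketched below: a classical planner emits waypoints toward a distant goal, and a learned low-level policy is queried at every control step to track the active waypoint. The `policy` and `env` interfaces, the straight-line stand-in planner, and all tolerances are assumptions for illustration, not the components used in the cited paper.

```python
from typing import Callable, List, Sequence

import numpy as np


def plan_waypoints(start: np.ndarray, goal: np.ndarray, step: float = 0.5) -> List[np.ndarray]:
    """Stand-in for a classical planner (e.g. graph search over a map):
    here it simply interpolates a straight line for illustration."""
    n = max(int(np.ceil(np.linalg.norm(goal - start) / step)), 1)
    return [start + (goal - start) * i / n for i in range(1, n + 1)]


def follow_waypoints(policy: Callable, env, waypoints: Sequence[np.ndarray],
                     reach_tol: float = 0.2, max_steps: int = 1000) -> None:
    """Low-level RL policy receives the observation and the active waypoint and
    returns joint-level commands; `policy` and `env` are assumed interfaces."""
    obs = env.reset()
    for waypoint in waypoints:
        for _ in range(max_steps):
            if np.linalg.norm(env.base_position() - waypoint) <= reach_tol:
                break  # waypoint reached, advance to the next one
            action = policy(obs, waypoint)       # low-level motion command
            obs, _, done, _ = env.step(action)
            if done:
                return
```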
arXiv Detail & Related papers (2024-03-06T16:49:08Z) - Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
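A minimal sketch of what per-episode domain randomization can look like is given below; the particular parameters and ranges are assumptions for illustration, not the ones used in the cited work.

```python
import random
from dataclasses import dataclass


@dataclass
class DynamicsParams:
    """Dynamics sample drawn once per episode (illustrative parameter set)."""
    ground_friction: float
    base_mass_scale: float
    motor_strength_scale: float
    action_latency_steps: int


def sample_dynamics(rng: random.Random) -> DynamicsParams:
    # Resampling at every episode start prevents the policy from overfitting
    # to a single simulated dynamics model.
    return DynamicsParams(
        ground_friction=rng.uniform(0.4, 1.2),
        base_mass_scale=rng.uniform(0.8, 1.2),
        motor_strength_scale=rng.uniform(0.8, 1.1),
        action_latency_steps=rng.randint(0, 2),
    )


rng = random.Random(0)
episode_dynamics = sample_dynamics(rng)  # apply to the simulator before each rollout
```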
arXiv Detail & Related papers (2021-03-26T07:14:01Z) - Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z) - Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)