BeamDojo: Learning Agile Humanoid Locomotion on Sparse Footholds
- URL: http://arxiv.org/abs/2502.10363v3
- Date: Sun, 27 Apr 2025 13:49:42 GMT
- Title: BeamDojo: Learning Agile Humanoid Locomotion on Sparse Footholds
- Authors: Huayi Wang, Zirui Wang, Junli Ren, Qingwei Ben, Tao Huang, Weinan Zhang, Jiangmiao Pang,
- Abstract summary: Existing learning-based approaches often struggle on complex terrains due to sparse foothold rewards and inefficient learning processes. We introduce BeamDojo, a reinforcement learning framework designed to enable agile humanoid locomotion on sparse footholds. We show that BeamDojo achieves efficient learning in simulation and enables agile locomotion with precise foot placement on sparse footholds in the real world.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traversing risky terrains with sparse footholds poses a significant challenge for humanoid robots, requiring precise foot placements and stable locomotion. Existing learning-based approaches often struggle on such complex terrains due to sparse foothold rewards and inefficient learning processes. To address these challenges, we introduce BeamDojo, a reinforcement learning (RL) framework designed to enable agile humanoid locomotion on sparse footholds. BeamDojo begins by introducing a sampling-based foothold reward tailored for polygonal feet, along with a double critic to balance the learning process between dense locomotion rewards and sparse foothold rewards. To encourage sufficient trial-and-error exploration, BeamDojo incorporates a two-stage RL approach: the first stage relaxes the terrain dynamics by training the humanoid on flat terrain while providing it with task-terrain perceptive observations, and the second stage fine-tunes the policy on the actual task terrain. Moreover, we implement an onboard LiDAR-based elevation map to enable real-world deployment. Extensive simulation and real-world experiments demonstrate that BeamDojo achieves efficient learning in simulation and enables agile locomotion with precise foot placement on sparse footholds in the real world, maintaining a high success rate even under significant external disturbances.
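A minimal sketch of the kind of sampling-based foothold reward the abstract describes, assuming a rectangular foot polygon and a binary terrain occupancy grid; the function and parameter names are illustrative, not the paper's implementation:

```python
import numpy as np

def foothold_reward(foot_corners, foothold_mask, cell_size=0.05,
                    n_samples=16, seed=0):
    """Sample points inside the foot polygon and score the fraction that
    land on valid terrain cells (1 = foothold, 0 = gap)."""
    rng = np.random.default_rng(seed)
    # Random convex combinations of the corners give points inside the
    # polygon's convex hull (exact for a rectangular foot).
    w = rng.dirichlet(np.ones(len(foot_corners)), size=n_samples)
    pts = w @ np.asarray(foot_corners)            # (n_samples, 2) world xy
    # Map world (x, y) to indices of the terrain occupancy grid.
    ij = np.floor(pts / cell_size).astype(int)
    ij = np.clip(ij, 0, np.array(foothold_mask.shape) - 1)
    hits = foothold_mask[ij[:, 0], ij[:, 1]]
    return hits.mean()  # 1.0 when the whole foot rests on valid terrain
```

Because the score degrades gradually as sample points fall off the foothold, such a reward is denser than a binary "foot on/off beam" signal, which is what makes it usable for polygonal feet on sparse terrain.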
Related papers
- A General Infrastructure and Workflow for Quadrotor Deep Reinforcement Learning and Reality Deployment [48.90852123901697]
We propose a platform that enables seamless transfer of end-to-end deep reinforcement learning (DRL) policies to quadrotors.
Our platform provides rich types of environments including hovering, dynamic obstacle avoidance, trajectory tracking, balloon hitting, and planning in unknown environments.
arXiv Detail & Related papers (2025-04-21T14:25:23Z) - Learning Humanoid Standing-up Control across Diverse Postures [27.79222176982376]
We present HoST (Humanoid Standing-up Control), a reinforcement learning framework that learns standing-up control from scratch. HoST effectively learns posture-adaptive motions by leveraging a multi-critic architecture and curriculum-based training on diverse simulated terrains. Our experimental results demonstrate that the controllers achieve smooth, stable, and robust standing-up motions across a wide range of laboratory and outdoor environments.
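The multi-critic idea here is close in spirit to BeamDojo's double critic: estimate values per reward group and combine normalized advantages so a sparse stream is not drowned out by a dense one. A generic sketch under that assumption (the helper names are hypothetical, and neither paper's exact formulation is implied):

```python
import numpy as np

def combined_advantage(rew_dense, rew_sparse, v_dense, v_sparse,
                       gamma=0.99, lam=0.95):
    """Compute GAE separately for dense and sparse reward streams, each
    with its own critic's values, then normalize before summing so
    neither stream dominates the policy gradient."""
    def gae(r, v):
        adv, last = np.zeros(len(r)), 0.0
        for t in reversed(range(len(r))):
            v_next = v[t + 1] if t + 1 < len(v) else 0.0
            delta = r[t] + gamma * v_next - v[t]      # TD residual
            last = delta + gamma * lam * last          # discounted trace
            adv[t] = last
        return adv
    a_d = gae(rew_dense, v_dense)
    a_s = gae(rew_sparse, v_sparse)
    norm = lambda a: (a - a.mean()) / (a.std() + 1e-8)
    return norm(a_d) + norm(a_s)
```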
arXiv Detail & Related papers (2025-02-12T13:10:09Z) - Learning Humanoid Locomotion over Challenging Terrain [84.35038297708485]
We present a learning-based approach for blind humanoid locomotion capable of traversing challenging natural and man-made terrains.
Our model is first pre-trained on a dataset of flat-ground trajectories with sequence modeling, and then fine-tuned on uneven terrain using reinforcement learning.
We evaluate our model on a real humanoid robot across a variety of terrains, including rough, deformable, and sloped surfaces.
arXiv Detail & Related papers (2024-10-04T17:57:09Z) - Learning Bipedal Walking for Humanoid Robots in Challenging Environments with Obstacle Avoidance [0.3481985817302898]
Deep reinforcement learning has seen successful implementations on humanoid robots to achieve dynamic walking.
In this paper, we aim to achieve bipedal locomotion in an environment where obstacles are present using policy-based reinforcement learning.
arXiv Detail & Related papers (2024-09-25T07:02:04Z) - Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning [37.95557495560936]
We introduce a hierarchical locomotion controller that combines a classical planner tasked with planning waypoints to reach a faraway global goal location, and an RL-based policy trained to follow these waypoints by generating low-level motion commands.
In simulation, our hierarchical approach succeeds at navigating through demanding confined 3D environments, outperforming both pure end-to-end learning approaches and parameterized locomotion skills.
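The planner/policy split described above can be sketched as a minimal interface: a classical planner emits waypoints toward a distant goal, and a low-level controller (the learned RL policy in the paper; a proportional stand-in here) tracks the current waypoint. All names are illustrative:

```python
import math
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float

def plan_waypoints(start, goal, step=0.5):
    """High-level stage: straight-line waypoints toward a faraway goal
    (a real classical planner would route around obstacles here)."""
    sx, sy = start
    gx, gy = goal
    n = max(1, int(math.hypot(gx - sx, gy - sy) // step))
    return [Waypoint(sx + (gx - sx) * i / n, sy + (gy - sy) * i / n)
            for i in range(1, n + 1)]

def low_level_command(pose, wp, k=1.0):
    """Low-level stage: a proportional velocity/heading command toward the
    current waypoint, standing in for the learned policy."""
    x, y, yaw = pose
    heading = math.atan2(wp.y - y, wp.x - x)
    return k * math.hypot(wp.x - x, wp.y - y), heading - yaw
```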
arXiv Detail & Related papers (2024-03-06T16:49:08Z) - Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Learning Robust, Agile, Natural Legged Locomotion Skills in the Wild [17.336553501547282]
We propose a new framework for learning robust, agile and natural legged locomotion skills over challenging terrain.
Empirical results in both simulation and the real world on a quadruped robot demonstrate that the proposed algorithm enables robust traversal of challenging terrains.
arXiv Detail & Related papers (2023-04-21T11:09:23Z) - Legged Locomotion in Challenging Terrains using Egocentric Vision [70.37554680771322]
We present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps.
We show this result on a medium-sized quadruped robot using a single front-facing depth camera.
arXiv Detail & Related papers (2022-11-14T18:59:58Z) - A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z) - Learning Coordinated Terrain-Adaptive Locomotion by Imitating a Centroidal Dynamics Planner [27.476911967228926]
Reinforcement Learning (RL) can learn dynamic reactive controllers but requires carefully tuned shaping rewards to produce good gaits.
Imitation learning circumvents this problem and has been used with motion capture data to extract quadruped gaits for flat terrains.
We show that the learned policies transfer to unseen terrains and can be fine-tuned to dynamically traverse challenging terrains.
arXiv Detail & Related papers (2021-10-30T14:24:39Z) - Learning to Jump from Pixels [23.17535989519855]
We present Depth-based Impulse Control (DIC), a method for synthesizing highly agile visually-guided behaviors.
DIC affords the flexibility of model-free learning but regularizes behavior through explicit model-based optimization of ground reaction forces.
We evaluate the proposed method both in simulation and in the real world.
arXiv Detail & Related papers (2021-10-28T17:53:06Z) - Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
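The "simulating realistic sensor noise" step for zero-shot transfer can be illustrated with a toy noise model applied to a simulated depth image; the parameter names and the particular noise model (range-dependent Gaussian noise plus pixel dropout) are assumptions for the sketch, not the paper's model:

```python
import numpy as np

def noisy_depth(depth, rng, dropout_p=0.02, sigma_rel=0.01):
    """Corrupt a simulated depth image so the policy trains on
    observations closer to real sensor output."""
    # Multiplicative noise: error grows with measured range.
    d = depth * (1.0 + sigma_rel * rng.standard_normal(depth.shape))
    # Random dropout: some returns are simply missing on real sensors.
    drop = rng.random(depth.shape) < dropout_p
    d[drop] = 0.0  # dropped returns read as zero range
    return np.clip(d, 0.0, None)
```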
arXiv Detail & Related papers (2021-10-11T09:43:11Z) - Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z) - Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.