Deep Whole-body Parkour
- URL: http://arxiv.org/abs/2601.07701v1
- Date: Mon, 12 Jan 2026 16:33:16 GMT
- Title: Deep Whole-body Parkour
- Authors: Ziwen Zhuang, Shaoting Zhu, Mengjie Zhao, Hang Zhao
- Abstract summary: We present a framework where exteroceptive sensing is integrated into whole-body motion tracking. We demonstrate the non-trivial benefit of integrating perception into the control loop. Results show that this framework enables robust, highly dynamic multi-contact motions, such as vaulting and dive-rolling, on unstructured terrain.
- Score: 33.232856360240106
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Current approaches to humanoid control generally fall into two paradigms: perceptive locomotion, which handles terrain well but is limited to pedal gaits, and general motion tracking, which reproduces complex skills but ignores environmental capabilities. This work unites these paradigms to achieve perceptive general motion control. We present a framework where exteroceptive sensing is integrated into whole-body motion tracking, permitting a humanoid to perform highly dynamic, non-locomotion tasks on uneven terrain. By training a single policy to perform multiple distinct motions across varied terrestrial features, we demonstrate the non-trivial benefit of integrating perception into the control loop. Our results show that this framework enables robust, highly dynamic multi-contact motions, such as vaulting and dive-rolling, on unstructured terrain, significantly expanding the robot's traversability beyond simple walking or running. https://project-instinct.github.io/deep-whole-body-parkour
Related papers
- Perceptive Humanoid Parkour: Chaining Dynamic Human Skills via Motion Matching [77.28042137892943]
We present Perceptive Humanoid Parkour (PHP), a modular framework that enables humanoid robots to autonomously perform long-horizon, vision-based parkour. We train motion-tracking reinforcement learning expert policies for these composed motions, and distill them into a single depth-based, multi-skill student policy. We validate our framework with extensive real-world experiments on a Unitree G1 humanoid robot.
arXiv Detail & Related papers (2026-02-17T18:59:11Z) - TextOp: Real-time Interactive Text-Driven Humanoid Robot Motion Generation and Control [62.93681680333618]
TextOp is a real-time text-driven humanoid motion generation and control framework. It supports streaming language commands and on-the-fly instruction modification during execution. By bridging interactive motion generation with robust whole-body control, TextOp unlocks free-form intent expression.
arXiv Detail & Related papers (2026-02-07T08:42:11Z) - KungfuBot2: Learning Versatile Motion Skills for Humanoid Whole-Body Control [30.738592041595933]
We present VMS, a unified whole-body controller that enables humanoid robots to learn diverse and dynamic behaviors within a single policy. Our framework integrates a hybrid tracking objective that balances local motion fidelity with global trajectory consistency. We validate VMS extensively in both simulation and real-world experiments, demonstrating accurate imitation of dynamic skills, stable performance over minute-long sequences, and strong generalization to unseen motions.
arXiv Detail & Related papers (2025-09-20T11:31:14Z) - KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills [58.73043119128804]
This paper presents a physics-based humanoid control framework, aiming to master highly-dynamic human behaviors such as Kungfu and dancing. For motion processing, we design a pipeline to extract, filter out, correct, and retarget motions, while ensuring compliance with physical constraints. For motion imitation, we formulate a bi-level optimization problem to dynamically adjust the tracking accuracy tolerance. In experiments, we train whole-body control policies to imitate a set of highly-dynamic motions.
arXiv Detail & Related papers (2025-06-15T13:58:53Z) - Humanoid Whole-Body Locomotion on Narrow Terrain via Dynamic Balance and Reinforcement Learning [54.26816599309778]
We propose a novel whole-body locomotion algorithm based on dynamic balance and Reinforcement Learning (RL). Specifically, we introduce a dynamic balance mechanism by leveraging an extended measure of Zero-Moment Point (ZMP)-driven rewards and task-driven rewards in a whole-body actor-critic framework. Experiments conducted on a full-sized Unitree H1-2 robot verify the ability of our method to maintain balance on extremely narrow terrains.
arXiv Detail & Related papers (2025-02-24T14:53:45Z) - Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion [34.33972863987201]
We train quadruped robots to use the front legs to climb walls, press buttons, and perform object interaction in the real world.
These skills are trained in simulation using curriculum and transferred to the real world using our proposed sim2real variant.
We evaluate our method in both simulation and real-world showing successful executions of both short as well as long-range tasks.
arXiv Detail & Related papers (2023-03-20T17:59:58Z) - Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.