JAEGER: Dual-Level Humanoid Whole-Body Controller
- URL: http://arxiv.org/abs/2505.06584v2
- Date: Mon, 16 Jun 2025 15:42:47 GMT
- Title: JAEGER: Dual-Level Humanoid Whole-Body Controller
- Authors: Ziluo Ding, Haobin Jiang, Yuxuan Wang, Zhenguo Sun, Yu Zhang, Xiaojie Niu, Ming Yang, Weishuai Zeng, Xinrun Xu, Zongqing Lu
- Abstract summary: JAEGER is a dual-level whole-body controller for humanoid robots. It addresses the challenges of training a more robust and versatile policy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents JAEGER, a dual-level whole-body controller for humanoid robots that addresses the challenges of training a more robust and versatile policy. Unlike traditional single-controller approaches, JAEGER separates the control of the upper and lower bodies into two independent controllers, so that each can better focus on its distinct tasks. This separation alleviates the curse of dimensionality and improves fault tolerance. JAEGER supports both root velocity tracking (coarse-grained control) and local joint angle tracking (fine-grained control), enabling versatile and stable movements. To train the controller, we utilize a human motion dataset (AMASS), retargeting human poses to humanoid poses through an efficient retargeting network, and employ a curriculum learning approach: supervised learning for initialization, followed by reinforcement learning for further exploration. We conduct our experiments on two humanoid platforms and demonstrate the superiority of our approach against state-of-the-art methods in both simulation and real environments.
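As a rough illustration of the dual-controller split described in the abstract, the sketch below (not from the paper; the network sizes, observation layout, and joint counts are assumptions) runs two independent policies over the same observation and concatenates their joint targets into one whole-body action:

```python
import numpy as np

# Hypothetical sketch of a dual-level split: an upper-body and a
# lower-body policy, each a small random-weight MLP standing in for a
# trained controller, whose outputs are concatenated into one action.
rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP used as a stand-in for a trained policy."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

OBS_DIM, UPPER_JOINTS, LOWER_JOINTS = 48, 10, 12   # illustrative sizes
upper_policy = mlp([OBS_DIM, 64, UPPER_JOINTS])    # arm/torso targets
lower_policy = mlp([OBS_DIM, 64, LOWER_JOINTS])    # leg targets

def whole_body_action(obs):
    # Both controllers see the full observation but would be trained on
    # their own objectives (fine-grained tracking vs. root velocity).
    return np.concatenate([forward(upper_policy, obs),
                           forward(lower_policy, obs)])

obs = rng.standard_normal(OBS_DIM)
action = whole_body_action(obs)
assert action.shape == (UPPER_JOINTS + LOWER_JOINTS,)
```

Because each sub-policy's action space is smaller than a single whole-body controller's, this kind of split is one way the curse of dimensionality the abstract mentions can be mitigated.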
Related papers
- DexTrack: Towards Generalizable Neural Tracking Control for Dexterous Manipulation from Human References
We address the challenge of developing a generalizable neural tracking controller for dexterous manipulation from human references. We introduce an approach that curates large-scale successful robot tracking demonstrations. Our method achieves over a 10% improvement in success rates compared to leading baselines.
arXiv Detail & Related papers (2025-02-13T18:59:13Z) - Learning Humanoid Standing-up Control across Diverse Postures
Standing-up control is crucial for humanoid robots, with the potential for integration into current locomotion and loco-manipulation systems. We present HoST (Humanoid Standing-up Control), a reinforcement learning framework that learns standing-up control from scratch. Our experimental results demonstrate that the controllers achieve smooth, stable, and robust standing-up motions across a wide range of laboratory and outdoor environments.
arXiv Detail & Related papers (2025-02-12T13:10:09Z) - DexterityGen: Foundation Controller for Unprecedented Dexterity
Teaching robots dexterous manipulation skills, such as tool use, presents a significant challenge. Current approaches can be broadly categorized into two strategies: human teleoperation (for imitation learning) and sim-to-real reinforcement learning. We introduce DexterityGen, which uses RL to pretrain large-scale dexterous motion primitives, such as in-hand rotation or translation. In the real world, we use human teleoperation as a prompt to the controller to produce highly dexterous behavior.
arXiv Detail & Related papers (2025-02-06T18:49:35Z) - ExBody2: Advanced Expressive Humanoid Whole-Body Control
We propose a method for producing whole-body tracking controllers that are trained on both human motion capture and simulated data. We use a teacher policy to produce intermediate data that better conforms to the robot's kinematics. We observed significant improvement of tracking performance after fine-tuning on a small amount of data.
arXiv Detail & Related papers (2024-12-17T18:59:51Z) - Agile and versatile bipedal robot tracking control through reinforcement learning
This paper proposes a versatile controller for bipedal robots.
It achieves ankle and body trajectory tracking across a wide range of gaits using a single small-scale neural network.
Highly flexible gait control can be achieved by combining minimal control units with a high-level policy.
arXiv Detail & Related papers (2024-04-12T05:25:03Z) - Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of
Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
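The domain randomization step described above can be sketched as resampling dynamics parameters at each episode reset, so the policy never overfits to one simulated dynamics model. The parameter names and ranges below are illustrative assumptions, not values from the paper:

```python
import random

# Minimal domain-randomization sketch: at each episode reset, draw
# physical parameters from ranges so the trained policy is robust to
# dynamics variation. Names and ranges are illustrative placeholders.
RANDOMIZATION_RANGES = {
    "ground_friction":      (0.4, 1.2),
    "torso_mass_scale":     (0.8, 1.2),
    "motor_strength_scale": (0.9, 1.1),
    "sensor_latency_s":     (0.0, 0.04),
}

def sample_dynamics(rng):
    """Sample one set of episode dynamics parameters."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

rng = random.Random(42)
params = sample_dynamics(rng)  # would be applied to the simulator here
for name, (lo, hi) in RANDOMIZATION_RANGES.items():
    assert lo <= params[name] <= hi
```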
arXiv Detail & Related papers (2021-03-26T07:14:01Z) - Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
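The two-level structure just described can be sketched as a selector plus per-primitive executors. The primitive names, the selection rule, and the toy "established controller" below are hypothetical placeholders for the learned and model-based components:

```python
# Hypothetical two-level sketch: a high-level selector reacts to the
# environment by choosing a primitive; a low-level executor (standing in
# for an established model-based controller) carries it out.
PRIMITIVES = ["trot", "recover", "brace"]

def high_level_select(contact_lost, body_tilt):
    # Stand-in for a learned selector reacting to environment changes.
    if contact_lost:
        return "recover"
    if abs(body_tilt) > 0.3:
        return "brace"
    return "trot"

def low_level_execute(primitive, t):
    # Stand-in for a per-primitive tracking controller; here just a
    # gain-scaled periodic command so the sketch is runnable.
    gains = {"trot": 1.0, "recover": 2.0, "brace": 1.5}
    return gains[primitive] * (0.5 - (t % 1.0))

assert high_level_select(contact_lost=True, body_tilt=0.0) == "recover"
assert high_level_select(contact_lost=False, body_tilt=0.5) == "brace"
assert high_level_select(contact_lost=False, body_tilt=0.0) == "trot"
```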
arXiv Detail & Related papers (2020-09-21T16:49:26Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating strong potential for transfer to real robots.
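The subgoal-plus-motion-generator split described above can be sketched as follows; the linear-interpolation "planner" and the 2-D base state are deliberate simplifications standing in for ReLMoGen's actual learned policy and motion generator:

```python
import numpy as np

# Hypothetical sketch: a policy outputs a nearby subgoal (2-D base
# position), and a motion generator plans waypoints toward it.
def policy_subgoal(obs):
    # Stand-in for the learned subgoal policy: step toward the goal
    # position encoded in the observation, bounded to a reachable radius.
    pos, goal = obs[:2], obs[2:4]
    direction = goal - pos
    norm = np.linalg.norm(direction)
    step = min(norm, 0.5)  # subgoals stay within a nearby region
    return pos + (direction / norm) * step if norm > 0 else pos

def motion_generator(start, subgoal, n_waypoints=5):
    # Interpolated waypoints play the role of the planned, executable
    # motion a real motion generator would produce.
    return [start + (subgoal - start) * (i / n_waypoints)
            for i in range(1, n_waypoints + 1)]

obs = np.array([0.0, 0.0, 2.0, 0.0])  # current position, goal position
sub = policy_subgoal(obs)
path = motion_generator(obs[:2], sub)
assert np.allclose(sub, [0.5, 0.0])
assert np.allclose(path[-1], sub)
```

Because the policy only ever emits subgoals, the motion generator can be swapped at test time without retraining, which is one plausible reading of the transferability claim above.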
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.