HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit
- URL: http://arxiv.org/abs/2502.13013v1
- Date: Tue, 18 Feb 2025 16:33:38 GMT
- Title: HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit
- Authors: Qingwei Ben, Feiyu Jia, Jia Zeng, Junting Dong, Dahua Lin, Jiangmiao Pang
- Abstract summary: Current humanoid teleoperation systems either lack reliable low-level control policies or struggle to acquire accurate whole-body control commands. We propose a novel humanoid teleoperation cockpit that integrates a humanoid loco-manipulation policy and a low-cost exoskeleton-based hardware system.
- Score: 52.12750762494588
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current humanoid teleoperation systems either lack reliable low-level control policies or struggle to acquire accurate whole-body control commands, making it difficult to teleoperate humanoids for loco-manipulation tasks. To solve these issues, we propose HOMIE, a novel humanoid teleoperation cockpit that integrates a humanoid loco-manipulation policy and a low-cost exoskeleton-based hardware system. The policy enables humanoid robots to walk and squat to specific heights while accommodating arbitrary upper-body poses. This is achieved through our novel reinforcement learning-based training framework that incorporates an upper-body pose curriculum, a height-tracking reward, and symmetry utilization, without relying on any motion priors. Complementing the policy, the hardware system integrates isomorphic exoskeleton arms, a pair of motion-sensing gloves, and a pedal, allowing a single operator to achieve full control of the humanoid robot. Our experiments show that our cockpit facilitates more stable, rapid, and precise humanoid loco-manipulation teleoperation, accelerating task completion and eliminating the retargeting errors of inverse kinematics-based methods. We also validate the effectiveness of the data collected by our cockpit for imitation learning. Our project is fully open-source; demos and code can be found at https://homietele.github.io/.
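Only the abstract is available here, so the Python sketch below merely illustrates what the three named training ingredients could look like; every name, shape, and schedule is an assumption, not HOMIE's released code.

```python
# A minimal sketch, assuming the abstract's three ingredients: a height-
# tracking reward, an upper-body pose curriculum, and left-right symmetry
# augmentation. All names and schedules are illustrative guesses.
import numpy as np

def height_tracking_reward(base_height, target_height, sigma=0.05):
    """Peaks at 1 when torso height matches the commanded squat height."""
    return np.exp(-((base_height - target_height) ** 2) / (2 * sigma ** 2))

def sample_upper_body_pose(joint_lo, joint_hi, progress):
    """Pose curriculum: widen the sampled upper-body pose range as
    training progresses (progress in [0, 1]); a linear schedule is assumed."""
    return np.random.uniform(progress * joint_lo, progress * joint_hi)

def mirror_transition(obs, act, obs_perm, act_perm, obs_sign, act_sign):
    """Symmetry utilization: a mirrored (obs, action) pair is an equally
    valid training sample under left-right symmetry, doubling effective data."""
    return obs_sign * obs[obs_perm], act_sign * act[act_perm]
```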
Related papers
- RUKA: Rethinking the Design of Humanoid Hands with Learning [15.909251187339228]
This work presents RUKA, a tendon-driven humanoid hand that is compact, affordable, and capable.
RUKA has 5 fingers with 15 underactuated degrees of freedom, enabling diverse human-like grasps.
To address control challenges, we learn joint-to-actuator and fingertip-to-actuator models from motion-capture data collected by the MANUS glove.
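The learned joint-to-actuator and fingertip-to-actuator mappings suggest straightforward supervised regression; the sketch below shows one plausible form. The network size, actuator count, and data layout are assumptions, not RUKA's actual design.

```python
# Hedged sketch: fitting a fingertip-position -> actuator-command model
# from glove motion-capture pairs, as the abstract describes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(15, 128), nn.ReLU(),   # 5 fingertips x 3D positions
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 11),              # one command per actuator (count assumed)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(fingertips, actuator_cmds):
    """One supervised step on (fingertip, actuator) pairs from the glove."""
    loss = nn.functional.mse_loss(model(fingertips), actuator_cmds)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```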
arXiv Detail & Related papers (2025-04-17T17:58:59Z) - A Unified and General Humanoid Whole-Body Controller for Fine-Grained Locomotion [30.418274871034775]
We propose HugWBC: a unified and general humanoid whole-body controller for fine-grained locomotion. HugWBC enables real-world humanoid robots to produce various natural gaits, including walking (running), jumping, standing, and hopping, with customizable parameters. HugWBC also supports real-time interventions from external upper-body controllers, such as teleoperation, enabling loco-manipulation.
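As a rough illustration of what such "customizable parameters" might look like, the following command structure is one plausible interface; the field names and defaults are guesses, not HugWBC's actual API.

```python
# Illustrative command vector for a parameterized gait controller.
from dataclasses import dataclass

@dataclass
class LocomotionCommand:
    vel_x: float = 0.0            # forward velocity (m/s)
    vel_y: float = 0.0            # lateral velocity (m/s)
    yaw_rate: float = 0.0         # turning rate (rad/s)
    gait_frequency: float = 1.5   # steps per second
    foot_swing_height: float = 0.1  # metres
    body_height: float = 0.75       # metres

    def to_observation(self):
        """Concatenate into the policy's command slice of the observation."""
        return [self.vel_x, self.vel_y, self.yaw_rate, self.gait_frequency,
                self.foot_swing_height, self.body_height]
```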
arXiv Detail & Related papers (2025-02-05T14:26:01Z) - ACE: A Cross-Platform Visual-Exoskeletons System for Low-Cost Dexterous Teleoperation [25.679146657293778]
Building efficient teleoperation systems across diverse robot platforms has become more crucial than ever.
We develop ACE, a cross-platform visual-exoskeleton system for low-cost dexterous teleoperation.
Compared to previous systems, our single system can generalize to humanoid hands, arm-hands, arm-gripper, and quadruped-gripper systems with high-precision teleoperation.
arXiv Detail & Related papers (2024-08-21T17:48:31Z) - AI-Powered Camera and Sensors for the Rehabilitation Hand Exoskeleton [0.393259574660092]
This project presents a vision-enabled rehabilitation hand exoskeleton to assist disabled persons in their hand movements.
The design goal was to create an accessible tool to help with a simple interface requiring no training.
arXiv Detail & Related papers (2024-08-09T04:47:37Z) - HumanPlus: Humanoid Shadowing and Imitation from Humans [82.47551890765202]
We introduce a full-stack system for humanoids to learn motion and autonomous skills from human data.
We first train a low-level policy in simulation via reinforcement learning using existing 40-hour human motion datasets.
We then perform supervised behavior cloning to train skill policies using egocentric vision, allowing humanoids to complete different tasks autonomously.
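The second stage is plain behavior cloning; a minimal sketch of such a supervised step, with placeholder shapes and architecture, might look like this:

```python
# Hedged sketch of a behavior-cloning step: regress the teleoperator's
# whole-body actions from egocentric visual features. Sizes are placeholders.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 19))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def bc_step(vision_features, expert_actions):
    loss = nn.functional.mse_loss(policy(vision_features), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```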
arXiv Detail & Related papers (2024-06-15T00:41:34Z) - OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning [45.51662378032706]
We present OmniH2O, a learning-based system for whole-body humanoid teleoperation and autonomy.
Using kinematic pose as a universal control interface, OmniH2O enables various ways for a human to control a full-sized humanoid with dexterous hands.
We release the first humanoid whole-body control dataset, OmniH2O-6, containing six everyday tasks, and demonstrate humanoid whole-body skill learning from teleoperated datasets.
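A hedged sketch of what "kinematic pose as a universal control interface" could mean in code: different input devices all reduce to the same keypoint targets that the low-level policy consumes. All names here are illustrative, not OmniH2O's implementation.

```python
# Two input sources, one interface: both produce the same kinematic targets.
def vr_to_targets(headset_pose, left_ctrl, right_ctrl):
    """Head and wrist 6D poses taken directly from a VR rig."""
    return {"head": headset_pose, "l_wrist": left_ctrl, "r_wrist": right_ctrl}

def rgb_to_targets(frame, pose_estimator):
    """The same targets recovered from monocular human pose estimation."""
    body = pose_estimator(frame)
    return {"head": body["head"], "l_wrist": body["l_wrist"],
            "r_wrist": body["r_wrist"]}

def step(policy, proprioception, targets):
    """Either source feeds the low-level policy identically."""
    return policy(proprioception, targets)
```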
arXiv Detail & Related papers (2024-06-13T06:44:46Z) - Visual Whole-Body Control for Legged Loco-Manipulation [22.50054654508986]
We study the problem of mobile manipulation using legged robots equipped with an arm.
We propose a framework that can conduct the whole-body control autonomously with visual observations.
arXiv Detail & Related papers (2024-03-25T17:26:08Z) - HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
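A minimal sketch of the hierarchical pattern this finding points to: a high-level task policy emits targets for a frozen, pre-trained low-level controller rather than raw joint actions. Names and shapes are assumptions.

```python
# Illustrative hierarchical control step with a frozen low-level policy.
import torch

def hierarchical_step(env_obs, high_level, low_level):
    target = high_level(env_obs)             # e.g. desired hand/body targets
    with torch.no_grad():
        action = low_level(env_obs, target)  # frozen low level -> joint actions
    return action
```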
arXiv Detail & Related papers (2024-03-15T17:45:44Z) - Expressive Whole-Body Control for Humanoid Robots [20.132927075816742]
We learn a whole-body control policy on a human-sized robot to mimic human motions as realistic as possible.
With training in simulation and Sim2Real transfer, our policy can control a humanoid robot to walk in different styles, shake hands with humans, and even dance with a human in the real world.
arXiv Detail & Related papers (2024-02-26T18:09:24Z) - InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
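The core constraint is easy to state as a loss; one plausible form is sketched below. How InterControl actually injects this signal into its sampler is not reproduced here.

```python
# Illustrative loss keeping a synthesized joint pair at a desired distance.
import torch

def joint_distance_loss(joints, pair, target_dist):
    """joints: (T, J, 3) motion; pair: (i, j) joint indices; target in metres."""
    i, j = pair
    dist = torch.linalg.norm(joints[:, i] - joints[:, j], dim=-1)  # (T,)
    return ((dist - target_dist) ** 2).mean()
```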
arXiv Detail & Related papers (2023-11-27T14:32:33Z) - Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
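One simple way to realize such augmentation is weighted batch mixing; the sketch below assumes a fixed mixing ratio and leaves any action relabeling of the human videos out of scope, so it is a plausible form rather than the paper's method.

```python
# Illustrative co-training batch: mix narrow robot demos with broad
# unlabeled human videos at an assumed fixed ratio.
import random

def sample_batch(robot_demos, human_videos, batch_size, human_fraction=0.5):
    n_human = int(batch_size * human_fraction)
    batch = random.sample(human_videos, n_human)
    batch += random.sample(robot_demos, batch_size - n_human)
    random.shuffle(batch)
    return batch
```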
arXiv Detail & Related papers (2023-07-12T07:04:53Z) - HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
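A hedged sketch of the micro-evolutionary idea: a morphology parameter interpolates the simulated embodiment from human toward robot, advancing only once the policy is competent. The blend and schedule below are assumptions, not HERD's actual path search.

```python
# Illustrative one-dimensional evolution path from human to robot morphology.
import numpy as np

def interpolate_morphology(human_kin, robot_kin, alpha):
    """Blend kinematic parameters (e.g. link lengths) between embodiments."""
    return (1.0 - alpha) * human_kin + alpha * robot_kin

def advance_alpha(alpha, success_rate, step=0.05, threshold=0.8):
    """Move along the evolution path only when the policy succeeds often."""
    return min(1.0, alpha + step) if success_rate > threshold else alpha
```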
arXiv Detail & Related papers (2022-12-08T15:56:13Z) - Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion [25.35885216505385]
An attached arm can significantly increase the applicability of legged robots to mobile manipulation tasks.
The standard hierarchical control pipeline for such legged manipulators decouples the controller into separate manipulation and locomotion modules.
We learn a unified policy for whole-body control of a legged manipulator using reinforcement learning.
arXiv Detail & Related papers (2022-10-18T17:59:30Z) - DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video [86.49357517864937]
We propose DexVIP, an approach to learn dexterous robotic grasping from human-object interaction videos.
We do this by curating grasp images from human-object interaction videos and imposing a prior over the agent's hand pose.
We demonstrate that DexVIP compares favorably to existing approaches that lack a hand pose prior or rely on specialized tele-operation equipment.
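A hand-pose prior of this kind could enter the reward as a proximity term to the curated human poses; the sketch below is one plausible form, with the distance metric and weighting assumed rather than taken from DexVIP.

```python
# Illustrative pose-prior reward: higher when the agent's hand pose is
# near any pose curated from human grasp images.
import numpy as np

def hand_pose_prior_reward(agent_joint_angles, curated_poses, beta=5.0):
    """curated_poses: (N, D) joint-angle vectors; agent pose: (D,)."""
    dists = np.linalg.norm(curated_poses - agent_joint_angles, axis=-1)
    return float(np.exp(-beta * dists.min()))
```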
arXiv Detail & Related papers (2022-02-01T00:45:57Z) - Embodied Hands: Modeling and Capturing Hands and Bodies Together [61.32931890166915]
Humans move their hands and bodies together to communicate and solve tasks.
Most methods treat the 3D modeling and tracking of bodies and hands separately.
We formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences.
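Fitting such a model to 4D sequences is typically per-frame optimization; a minimal sketch with only a data term follows, where the model call stands in for SMPL+H-style forward kinematics and the loss omits the priors a real fit would need.

```python
# Hedged sketch: optimize per-frame pose parameters of a joint
# body-plus-hands model against observed surface points.
import torch

def fit_frame(model, observed_points, pose, shape, iters=100, lr=0.01):
    pose = pose.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(iters):
        verts = model(pose, shape)                      # posed surface vertices
        loss = ((verts - observed_points) ** 2).mean()  # data term only
        opt.zero_grad(); loss.backward(); opt.step()
    return pose.detach()
```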
arXiv Detail & Related papers (2022-01-07T18:59:32Z)