ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning
- URL: http://arxiv.org/abs/2503.21860v1
- Date: Thu, 27 Mar 2025 17:50:30 GMT
- Title: ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning
- Authors: Kailin Li, Puhao Li, Tengyu Liu, Yuyang Li, Siyuan Huang,
- Abstract summary: We introduce ManipTrans, a novel method for transferring human bimanual skills to dexterous robotic hands in simulation. Experiments show that ManipTrans surpasses state-of-the-art methods in success rate, fidelity, and efficiency. We also create DexManipNet, a large-scale dataset featuring previously unexplored tasks like pen capping and bottle unscrewing.
- Score: 24.675197489823898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human hands play a central role in interacting, motivating increasing research in dexterous robotic manipulation. Data-driven embodied AI algorithms demand precise, large-scale, human-like manipulation sequences, which are challenging to obtain with conventional reinforcement learning or real-world teleoperation. To address this, we introduce ManipTrans, a novel two-stage method for efficiently transferring human bimanual skills to dexterous robotic hands in simulation. ManipTrans first pre-trains a generalist trajectory imitator to mimic hand motion, then fine-tunes a specific residual module under interaction constraints, enabling efficient learning and accurate execution of complex bimanual tasks. Experiments show that ManipTrans surpasses state-of-the-art methods in success rate, fidelity, and efficiency. Leveraging ManipTrans, we transfer multiple hand-object datasets to robotic hands, creating DexManipNet, a large-scale dataset featuring previously unexplored tasks like pen capping and bottle unscrewing. DexManipNet comprises 3.3K episodes of robotic manipulation and is easily extensible, facilitating further policy training for dexterous hands and enabling real-world deployments.
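The two-stage design described in the abstract, a pre-trained base imitator whose actions are corrected by a fine-tuned residual module, can be sketched as follows. This is a minimal illustration of residual policy learning in general; the class names, toy dimensions, and linear networks are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

class TrajectoryImitator:
    """Stand-in for the pre-trained generalist trajectory imitator
    (hypothetical: the paper's real network is not specified here)."""
    def __init__(self, rng, dim=8):
        self.w = rng.standard_normal((dim, dim)) * 0.1

    def act(self, obs):
        # Base action that mimics the reference human hand motion
        return np.tanh(self.w @ obs)

class ResidualModule:
    """Small correction module, fine-tuned under interaction constraints."""
    def __init__(self, dim=8):
        # Zero-initialized so the combined policy starts identical to the base
        self.w = np.zeros((dim, dim))

    def act(self, obs):
        return self.w @ obs

def combined_action(base, residual, obs):
    # Residual learning: final action = base imitation action + learned correction
    return base.act(obs) + residual.act(obs)

rng = np.random.default_rng(0)
base, res = TrajectoryImitator(rng), ResidualModule()
obs = rng.standard_normal(8)
action = combined_action(base, res, obs)
```

Because the residual starts at zero, fine-tuning begins from the base imitator's behavior and only learns the correction needed to satisfy contact and interaction constraints.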
Related papers
- MAPLE: Encoding Dexterous Robotic Manipulation Priors Learned From Egocentric Videos [43.836197294180316]
We present MAPLE, a novel method for dexterous robotic manipulation that exploits rich manipulation priors to enable efficient policy learning.
Specifically, we predict hand-object contact points and detailed hand poses at the moment of hand-object contact and use the learned features to train policies for downstream manipulation tasks.
arXiv Detail & Related papers (2025-04-08T14:25:25Z) - Dexterous Manipulation through Imitation Learning: A Survey [28.04590024211786]
Imitation learning (IL) offers an alternative by allowing robots to acquire dexterous manipulation skills directly from expert demonstrations.
IL captures fine-grained coordination and contact dynamics while bypassing the need for explicit modeling and large-scale trial-and-error.
Our goal is to offer researchers and practitioners a comprehensive introduction to this rapidly evolving domain.
arXiv Detail & Related papers (2025-04-04T15:14:38Z) - AnyDexGrasp: General Dexterous Grasping for Different Hands with Human-level Learning Efficiency [49.868970174484204]
We introduce an efficient approach for learning dexterous grasping with minimal data.
Our method achieves high performance with human-level learning efficiency: only hundreds of grasp attempts on 40 training objects.
This method demonstrates promising applications for humanoid robots, prosthetics, and other domains requiring robust, versatile robotic manipulation.
arXiv Detail & Related papers (2025-02-23T03:26:06Z) - DexterityGen: Foundation Controller for Unprecedented Dexterity [67.15251368211361]
Teaching robots dexterous manipulation skills, such as tool use, presents a significant challenge. Current approaches can be broadly categorized into two strategies: human teleoperation (for imitation learning) and sim-to-real reinforcement learning. We introduce DexterityGen, which uses RL to pretrain large-scale dexterous motion primitives, such as in-hand rotation or translation. In the real world, we use human teleoperation as a prompt to the controller to produce highly dexterous behavior.
arXiv Detail & Related papers (2025-02-06T18:49:35Z) - VTAO-BiManip: Masked Visual-Tactile-Action Pre-training with Object Understanding for Bimanual Dexterous Manipulation [8.882764358932276]
Bimanual dexterous manipulation remains a significant challenge in robotics due to the high DoFs of each hand and the need to coordinate them. Existing single-hand manipulation techniques often leverage human demonstrations to guide RL methods but fail to generalize to complex bimanual tasks involving multiple sub-skills. We introduce VTAO-BiManip, a novel framework that combines visual-tactile-action pretraining with object understanding to facilitate curriculum RL and enable human-like bimanual manipulation.
arXiv Detail & Related papers (2025-01-07T08:14:53Z) - DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning [42.88605563822155]
We present a large-scale automated data generation system that synthesizes trajectories from human demonstrations for humanoid robots with dexterous hands. We generate 21K demos across these tasks from just 60 source human demos. We also present a real-to-sim-to-real pipeline and deploy it on a real-world humanoid can sorting task.
arXiv Detail & Related papers (2024-10-31T17:48:45Z) - Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - RealDex: Towards Human-like Grasping for Robotic Dexterous Hand [64.33746404551343]
We introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns. RealDex holds immense promise in advancing humanoid robots toward automated perception, cognition, and manipulation in real-world scenarios.
arXiv Detail & Related papers (2024-02-21T14:59:46Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient Dexterous Manipulation [13.135013586592585]
'Dexterous Imitation Made Easy' (DIME) is a new imitation learning framework for dexterous manipulation.
DIME only requires a single RGB camera to observe a human operator and teleoperate our robotic hand.
On both simulation and real robot benchmarks, we demonstrate that DIME can be used to solve complex in-hand manipulation tasks.
arXiv Detail & Related papers (2022-03-24T17:58:54Z) - Human-in-the-Loop Imitation Learning using Remote Teleoperation [72.2847988686463]
We build a data collection system tailored to 6-DoF manipulation settings.
We develop an algorithm to train the policy iteratively on new data collected by the system.
We demonstrate that agents trained on data collected by our intervention-based system and algorithm outperform agents trained on an equivalent number of samples collected by non-interventional demonstrators.
arXiv Detail & Related papers (2020-12-12T05:30:35Z)
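The intervention-based collection scheme in the last entry, where a human takes over the rollout only when needed and the corrective samples feed iterative retraining, can be sketched roughly as follows. The toy 1-D dynamics, the intervention threshold, and all function names are assumptions for illustration, not the paper's system.

```python
def rollout_with_interventions(policy, human, intervene, steps=20):
    """Toy sketch of human-in-the-loop collection: the human takes over
    only when an intervention criterion fires, and those corrective
    samples are stored for the next round of policy training."""
    data, obs = [], 0.0
    for _ in range(steps):
        if intervene(obs):
            action = human(obs)          # human override
            data.append((obs, action))   # keep the corrective sample
        else:
            action = policy(obs)         # autonomous rollout
        obs += action                    # toy 1-D dynamics
    return data

# Illustrative setup: the policy drifts upward, the human pulls the state back to zero
policy = lambda obs: 0.1
human = lambda obs: -obs
intervene = lambda obs: abs(obs) > 0.5
collected = rollout_with_interventions(policy, human, intervene)
```

Collecting only the intervention samples concentrates the training signal on states where the current policy fails, which is one plausible reading of why intervention-based data outperforms an equal number of passively collected demonstrations.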
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.