CEDex: Cross-Embodiment Dexterous Grasp Generation at Scale from Human-like Contact Representations
- URL: http://arxiv.org/abs/2509.24661v1
- Date: Mon, 29 Sep 2025 12:08:04 GMT
- Title: CEDex: Cross-Embodiment Dexterous Grasp Generation at Scale from Human-like Contact Representations
- Authors: Zhiyuan Wu, Rolandos Alexandros Potamias, Xuyang Zhang, Zhongqun Zhang, Jiankang Deng, Shan Luo
- Abstract summary: Cross-embodiment dexterous grasp synthesis refers to adaptively generating and optimizing grasps for various robotic hands. We propose CEDex, a novel cross-embodiment dexterous grasp synthesis method at scale. We construct the largest cross-embodiment grasp dataset to date, comprising 500K objects across four gripper types with 20M total grasps.
- Score: 53.37721117405022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-embodiment dexterous grasp synthesis refers to adaptively generating and optimizing grasps for various robotic hands with different morphologies. This capability is crucial for achieving versatile robotic manipulation in diverse environments and requires substantial amounts of reliable and diverse grasp data for effective model training and robust generalization. However, existing approaches either rely on physics-based optimization that lacks human-like kinematic understanding or require extensive manual data collection processes that are limited to anthropomorphic structures. In this paper, we propose CEDex, a novel cross-embodiment dexterous grasp synthesis method at scale that bridges human grasping kinematics and robot kinematics by aligning robot kinematic models with generated human-like contact representations. Given an object's point cloud and an arbitrary robotic hand model, CEDex first generates human-like contact representations using a Conditional Variational Auto-encoder pretrained on human contact data. It then performs kinematic human contact alignment through topological merging to consolidate multiple human hand parts into unified robot components, followed by a signed distance field-based grasp optimization with physics-aware constraints. Using CEDex, we construct the largest cross-embodiment grasp dataset to date, comprising 500K objects across four gripper types with 20M total grasps. Extensive experiments show that CEDex outperforms state-of-the-art approaches and our dataset benefits cross-embodiment grasp learning with high-quality diverse grasps.
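The final stage of the pipeline described above, signed distance field-based grasp optimization, can be illustrated with a minimal sketch. This is not the paper's implementation: the object is a toy sphere, the "hand" is reduced to free-floating fingertip points (the paper optimizes full robot kinematic models against CVAE-generated contact representations), and the loss terms, weights, and finite-difference optimizer are illustrative assumptions. The sketch only shows the core idea of pulling contact points onto the zero level set of an SDF while penalizing penetration.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=0.05):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def grasp_loss(fingertips, sdf, w_pen=10.0):
    """Illustrative objective: fingertips on the surface, no penetration."""
    d = sdf(fingertips)
    contact = np.sum(d ** 2)                            # pull fingertips onto the zero level set
    penetration = np.sum(np.clip(-d, 0.0, None) ** 2)   # extra penalty for points inside the object
    return contact + w_pen * penetration

def optimize_grasp(fingertips0, sdf, lr=0.01, steps=200, eps=1e-5):
    """Central finite-difference gradient descent on fingertip positions."""
    x = fingertips0.astype(float).copy()
    flat = x.ravel()  # view into x, so updates to flat modify x
    for _ in range(steps):
        g = np.zeros_like(flat)
        for i in range(flat.size):
            old = flat[i]
            flat[i] = old + eps
            up = grasp_loss(x, sdf)
            flat[i] = old - eps
            down = grasp_loss(x, sdf)
            flat[i] = old
            g[i] = (up - down) / (2 * eps)
        flat -= lr * g
    return x

# Three fingertips hovering 5 cm outside a 5 cm-radius sphere.
fingertips0 = np.array([[0.10, 0.00, 0.00],
                        [0.00, 0.10, 0.00],
                        [0.00, 0.00, 0.10]])
opt = optimize_grasp(fingertips0, sphere_sdf)
# After optimization, each fingertip lies close to the sphere surface (|SDF| near 0).
```

In the paper's setting the decision variables would be robot joint angles mapped through forward kinematics rather than raw point positions, and the contact targets would come from the pretrained CVAE rather than being implicit in the loss; the SDF surface-attraction and penetration terms shown here are the common structure.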
Related papers
- MeshMimic: Geometry-Aware Humanoid Motion Learning through 3D Scene Reconstruction [54.36564144414704]
MeshMimic is an innovative framework that bridges 3D scene reconstruction and embodied intelligence to enable humanoid robots to learn coupled "motion-terrain" interactions directly from video. By leveraging state-of-the-art 3D vision models, our framework precisely segments and reconstructs both human trajectories and the underlying 3D geometry of terrains and objects.
arXiv Detail & Related papers (2026-02-17T17:09:45Z) - Decoupled Generative Modeling for Human-Object Interaction Synthesis [35.78156236836254]
Existing approaches often require manually specified intermediate waypoints and place all optimization objectives on a single network. We propose Decoupled Generative Modeling for Human-Object Interaction Synthesis (DecHOI). A trajectory generator first produces human and object trajectories without prescribed waypoints, and an action generator conditions on these paths to synthesize detailed motions.
arXiv Detail & Related papers (2025-12-22T05:33:59Z) - DexCanvas: Bridging Human Demonstrations and Robot Learning for Dexterous Manipulation [25.208854363099352]
This dataset contains 7,000 hours of dexterous hand-object interactions seeded from 70 hours of real human demonstrations. Each entry combines synchronized multi-view RGB-D, high-precision mocap with MANO hand parameters, and per-frame contact points with physically consistent force profiles. Our real-to-sim pipeline uses reinforcement learning to train policies that control an actuated MANO hand in physics simulation.
arXiv Detail & Related papers (2025-10-17T16:08:14Z) - H-RDT: Human Manipulation Enhanced Bimanual Robotic Manipulation [27.585828712261232]
H-RDT (Human to Robotics Diffusion Transformer) is a novel approach that leverages human manipulation data to enhance robot manipulation capabilities. Our key insight is that large-scale egocentric human manipulation videos with paired 3D hand pose annotations provide rich behavioral priors that capture natural manipulation strategies. We introduce a two-stage training paradigm: (1) pre-training on large-scale egocentric human manipulation data, and (2) cross-embodiment fine-tuning on robot-specific data with modular action encoders and decoders.
arXiv Detail & Related papers (2025-07-31T13:06:59Z) - Physics-Driven Data Generation for Contact-Rich Manipulation via Trajectory Optimization [22.234170426206987]
We present a low-cost data generation pipeline that integrates physics-based simulation, human demonstrations, and model-based planning. We validate the pipeline's effectiveness by training diffusion policies for challenging contact-rich manipulation tasks. The trained policies are deployed zero-shot on hardware for bimanual iiwa arms, achieving high success rates with minimal human input.
arXiv Detail & Related papers (2025-02-27T18:56:01Z) - AnyDexGrasp: General Dexterous Grasping for Different Hands with Human-level Learning Efficiency [49.868970174484204]
We introduce an efficient approach for learning dexterous grasping with minimal data. Our method achieves high performance with human-level learning efficiency: only hundreds of grasp attempts on 40 training objects. This method demonstrates promising applications for humanoid robots, prosthetics, and other domains requiring robust, versatile robotic manipulation.
arXiv Detail & Related papers (2025-02-23T03:26:06Z) - Mitigating the Human-Robot Domain Discrepancy in Visual Pre-training for Robotic Manipulation [16.809190349155525]
We propose a novel adaptation paradigm that leverages readily available paired human-robot video data to bridge the domain gap. Our method employs a human-robot contrastive alignment loss to align the semantics of human and robot videos, adapting pre-trained models to the robot domain in a parameter-efficient manner.
arXiv Detail & Related papers (2024-06-20T11:57:46Z) - RealDex: Towards Human-like Grasping for Robotic Dexterous Hand [64.33746404551343]
We introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns. RealDex holds immense promise in advancing humanoid robots for automated perception, cognition, and manipulation in real-world scenarios.
arXiv Detail & Related papers (2024-02-21T14:59:46Z) - InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, which encourages synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the desired distances between joint pairs for human interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z) - SynH2R: Synthesizing Hand-Object Motions for Learning Human-to-Robot Handovers [35.386426373890615]
Vision-based human-to-robot handover is an important and challenging task in human-robot interaction. We introduce a framework that can generate plausible human grasping motions suitable for training the robot. This allows us to generate synthetic training and testing data with 100x more objects than previous work.
arXiv Detail & Related papers (2023-11-09T18:57:02Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.