CMG-Net: An End-to-End Contact-Based Multi-Finger Dexterous Grasping Network
- URL: http://arxiv.org/abs/2303.13182v1
- Date: Thu, 23 Mar 2023 11:29:31 GMT
- Title: CMG-Net: An End-to-End Contact-Based Multi-Finger Dexterous Grasping Network
- Authors: Mingze Wei, Yaomin Huang, Zhiyuan Xu, Ning Liu, Zhengping Che, Xinyu Zhang, Chaomin Shen, Feifei Feng, Chun Shan, Jian Tang
- Abstract summary: We present an effective end-to-end network, CMG-Net, for grasping unknown objects in a cluttered environment.
We create a synthetic grasp dataset that consists of five thousand cluttered scenes, 80 object categories, and 20 million annotations.
Our work significantly outperforms the state-of-the-art for three-finger robotic hands.
- Score: 25.879649629474212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel representation for grasping using contacts between multi-finger robotic hands and the objects to be manipulated. This representation significantly reduces the prediction dimensions and accelerates the learning process. We present an effective end-to-end network, CMG-Net, for grasping unknown objects in a cluttered environment by efficiently predicting multi-finger grasp poses and hand configurations from a single-shot point cloud. Moreover, we create a synthetic grasp dataset that consists of five thousand cluttered scenes, 80 object categories, and 20 million annotations. We perform a comprehensive empirical study and demonstrate the effectiveness of our grasping representation and CMG-Net. Our work significantly outperforms the state of the art for three-finger robotic hands. We also demonstrate that the model trained using synthetic data performs very well on real robots.
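The abstract's claim that a contact-based representation "significantly reduces the prediction dimensions" can be illustrated with a minimal sketch. This is not the paper's actual parameterization; the joint count, finger count, and function names below are assumptions chosen only to show why predicting fingertip contact points on the object surface needs fewer outputs than regressing a full wrist pose plus every joint angle.

```python
# Hypothetical dimension comparison (all numbers are illustrative
# assumptions, not values taken from the CMG-Net paper).

N_FINGERS = 3   # e.g. a three-finger hand, as evaluated in the paper
N_JOINTS = 11   # assumed joint count for a generic dexterous hand


def direct_grasp_dims(n_joints: int) -> int:
    """Outputs needed to regress a grasp directly:
    3 for wrist translation + 3 for wrist orientation + one per joint."""
    return 3 + 3 + n_joints


def contact_grasp_dims(n_fingers: int) -> int:
    """Outputs needed for a contact-based grasp:
    one 3-D contact point per fingertip on the object surface."""
    return 3 * n_fingers


print(direct_grasp_dims(N_JOINTS))    # 17 outputs per grasp
print(contact_grasp_dims(N_FINGERS))  # 9 outputs per grasp
```

Under these assumed numbers the contact representation roughly halves the regression target, which is one plausible reading of why the learning process accelerates; the hand configuration is then recovered from the predicted contacts rather than regressed directly.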
Related papers
- Multi-fingered Robotic Hand Grasping in Cluttered Environments through Hand-object Contact Semantic Mapping [8.11121483911344]
We develop a novel method for generating five-fingered hand grasp samples in cluttered settings.
A key aspect of our approach is our data generation method, capable of estimating contact spatial and semantic representations.
We introduce a unique grasp detection technique that efficiently formulates mechanical hand grasp poses from these maps.
arXiv Detail & Related papers (2024-04-12T23:11:36Z)
- MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations [55.549956643032836]
MimicGen is a system for automatically synthesizing large-scale, rich datasets from only a small number of human demonstrations.
We show that robot agents can be effectively trained on this generated dataset by imitation learning to achieve strong performance in long-horizon and high-precision tasks.
arXiv Detail & Related papers (2023-10-26T17:17:31Z)
- DMFC-GraspNet: Differentiable Multi-Fingered Robotic Grasp Generation in Cluttered Scenes [22.835683657191936]
Multi-fingered robotic grasping can potentially perform complex object manipulation.
Current techniques for multi-fingered robotic grasping frequently predict only a single grasp per inference.
This paper proposes a differentiable multi-fingered grasp generation network (DMFC-GraspNet) with three main contributions to address this challenge.
arXiv Detail & Related papers (2023-08-01T11:21:07Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
- Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions [82.90906153293585]
We propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
We show that the proposed network, which consumes dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects.
arXiv Detail & Related papers (2022-06-25T09:55:39Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- Few-Shot Visual Grounding for Natural Human-Robot Interaction [0.0]
We propose a software architecture that segments a target object from a crowded scene, indicated verbally by a human user.
At the core of our system, we employ a multi-modal deep neural network for visual grounding.
We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets.
arXiv Detail & Related papers (2021-03-17T15:24:02Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot's hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.