ContactGen: Generative Contact Modeling for Grasp Generation
- URL: http://arxiv.org/abs/2310.03740v1
- Date: Thu, 5 Oct 2023 17:59:45 GMT
- Title: ContactGen: Generative Contact Modeling for Grasp Generation
- Authors: Shaowei Liu, Yang Zhou, Jimei Yang, Saurabh Gupta, Shenlong Wang
- Abstract summary: This paper presents a novel object-centric contact representation, ContactGen, for hand-object interaction.
We propose a conditional generative model to predict ContactGen and adopt model-based optimization to predict diverse and geometrically feasible grasps.
- Score: 37.56729700157981
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel object-centric contact representation ContactGen
for hand-object interaction. ContactGen comprises three components: a
contact map indicating the contact location, a part map representing the
contacting hand part, and a direction map specifying the contact direction within each part.
Given an input object, we propose a conditional generative model to predict
ContactGen and adopt model-based optimization to predict diverse and
geometrically feasible grasps. Experimental results demonstrate our method can
generate high-fidelity and diverse human grasps for various objects. Project
page: https://stevenlsw.github.io/contactgen/
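To make the three-component representation concrete, here is a minimal sketch of what a ContactGen-style data structure could look like for N sampled object points. This is an illustrative assumption based only on the abstract; the field names, shapes, and the part count are hypothetical, not the authors' actual code.

```python
# Hypothetical sketch of the ContactGen representation: per-object-point
# contact, part, and direction maps. Shapes and names are assumptions.
from dataclasses import dataclass
import numpy as np

NUM_HAND_PARTS = 16  # assumed number of hand-part labels

@dataclass
class ContactGen:
    contact: np.ndarray    # (N,) contact probability per object point
    part: np.ndarray       # (N,) index of the contacting hand part
    direction: np.ndarray  # (N, 3) unit contact direction within each part

    def validate(self) -> None:
        n = self.contact.shape[0]
        assert self.contact.shape == (n,)
        assert ((self.contact >= 0) & (self.contact <= 1)).all()
        assert self.part.shape == (n,)
        assert ((self.part >= 0) & (self.part < NUM_HAND_PARTS)).all()
        assert self.direction.shape == (n, 3)
        # directions only need to be unit-length where contact is likely
        norms = np.linalg.norm(self.direction, axis=1)
        assert np.allclose(norms[self.contact > 0.5], 1.0, atol=1e-5)

# Example: a well-formed instance for 4 sampled object points
rng = np.random.default_rng(0)
dirs = rng.normal(size=(4, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
g = ContactGen(
    contact=np.array([0.9, 0.1, 0.7, 0.0]),
    part=np.array([3, 0, 12, 0]),
    direction=dirs,
)
g.validate()
```

In the paper's pipeline, a conditional generative model would predict such maps for an input object, after which model-based optimization fits a hand pose consistent with them.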
Related papers
- ClickDiff: Click to Induce Semantic Contact Map for Controllable Grasp Generation with Diffusion Models [17.438429495623755]
ClickDiff is a controllable conditional generation model that leverages a fine-grained Semantic Contact Map.
Within this framework, the Semantic Conditional Module generates reasonable contact maps based on fine-grained contact information.
We evaluate the validity of our proposed method, demonstrating the efficacy and robustness of ClickDiff, even with previously unseen objects.
arXiv Detail & Related papers (2024-07-28T02:42:29Z) - NL2Contact: Natural Language Guided 3D Hand-Object Contact Modeling with Diffusion Model [45.00669505173757]
NL2Contact is a model that generates controllable contacts by leveraging staged diffusion models.
Given a language description of the hand and contact, NL2Contact generates realistic and faithful 3D hand-object contacts.
We show applications of our model to grasp pose optimization and novel human grasp generation.
arXiv Detail & Related papers (2024-07-17T16:46:40Z) - G-HOP: Generative Hand-Object Prior for Interaction Reconstruction and Grasp Synthesis [57.07638884476174]
G-HOP is a denoising diffusion based generative prior for hand-object interactions.
We represent the human hand via a skeletal distance field to obtain a representation aligned with the signed distance field for the object.
We show that this hand-object prior can then serve as generic guidance to facilitate other tasks like reconstruction from an interaction clip and human grasp synthesis.
arXiv Detail & Related papers (2024-04-18T17:59:28Z) - Contact-aware Human Motion Generation from Textual Descriptions [57.871692507044344]
This paper addresses the problem of generating 3D interactive human motion from text.
We create a novel dataset named RICH-CAT, representing "Contact-Aware Texts".
We propose a novel approach named CATMO for text-driven interactive human motion synthesis.
arXiv Detail & Related papers (2024-03-23T04:08:39Z) - ContactGen: Contact-Guided Interactive 3D Human Generation for Partners [9.13466172688693]
We introduce a new task of 3D human generation in terms of physical contact.
A given partner human can have diverse poses and different contact regions according to the type of interaction.
We propose a novel method of generating interactive 3D humans for a given partner human based on a guided diffusion framework.
arXiv Detail & Related papers (2024-01-30T17:57:46Z) - Learning Explicit Contact for Implicit Reconstruction of Hand-held Objects from Monocular Images [59.49985837246644]
We show how to model contacts in an explicit way to benefit the implicit reconstruction of hand-held objects.
In the first part, we propose a new subtask of directly estimating 3D hand-object contacts from a single image.
In the second part, we introduce a novel method to diffuse estimated contact states from the hand mesh surface to nearby 3D space.
arXiv Detail & Related papers (2023-05-31T17:59:26Z) - Contact2Grasp: 3D Grasp Synthesis via Hand-Object Contact Constraint [18.201389966034263]
3D grasp synthesis generates grasping poses given an input object.
We introduce an intermediate variable for grasp contact areas to constrain the grasp generation.
Our method outperforms state-of-the-art methods regarding grasp generation on various metrics.
arXiv Detail & Related papers (2022-10-17T16:39:25Z) - ContactPose: A Dataset of Grasps with Object Contact and Hand Pose [27.24450178180785]
We introduce ContactPose, the first dataset of hand-object contact paired with hand pose, object pose, and RGB-D images.
ContactPose has 2306 unique grasps of 25 household objects grasped with 2 functional intents by 50 participants, and more than 2.9M RGB-D grasp images.
arXiv Detail & Related papers (2020-07-19T01:01:14Z) - 3D Shape Reconstruction from Vision and Touch [62.59044232597045]
In 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored.
We introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects.
arXiv Detail & Related papers (2020-07-07T20:20:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.