ContactOpt: Optimizing Contact to Improve Grasps
- URL: http://arxiv.org/abs/2104.07267v1
- Date: Thu, 15 Apr 2021 06:40:51 GMT
- Title: ContactOpt: Optimizing Contact to Improve Grasps
- Authors: Patrick Grady, Chengcheng Tang, Christopher D. Twigg, Minh Vo, Samarth Brahmbhatt, Charles C. Kemp
- Abstract summary: Physical contact between hands and objects plays a critical role in human grasps.
We show that optimizing the pose of a hand to achieve expected contact with an object can improve hand poses inferred via image-based methods.
- Score: 17.518463627346897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical contact between hands and objects plays a critical role in human
grasps. We show that optimizing the pose of a hand to achieve expected contact
with an object can improve hand poses inferred via image-based methods. Given a
hand mesh and an object mesh, a deep model trained on ground truth contact data
infers desirable contact across the surfaces of the meshes. Then, ContactOpt
efficiently optimizes the pose of the hand to achieve desirable contact using a
differentiable contact model. Notably, our contact model encourages mesh
interpenetration to approximate deformable soft tissue in the hand. In our
evaluations, our methods result in grasps that better match ground truth
contact, have lower kinematic error, and are significantly preferred by human
participants. Code and models are available online.
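To make the optimization step concrete, the snippet below is a minimal sketch of contact-driven pose refinement in the spirit of the abstract above. It is not the released ContactOpt code: the stand-in hand model (a global translation rather than an articulated MANO model), the contact threshold, and the loss are illustrative assumptions.
```python
import torch

def hand_vertices(pose, template):
    # Hypothetical stand-in for a differentiable hand model. A real pipeline
    # would map articulated pose parameters (e.g., MANO) to mesh vertices;
    # here "pose" is just a global translation so the sketch stays self-contained.
    return template + pose

def refine_pose(pose_init, template, obj_verts, target_contact,
                steps=200, lr=1e-2, contact_eps=2e-3):
    """Optimize the hand pose so per-vertex distances to the object agree with
    a predicted contact map (1 = vertex should touch, 0 = should stay clear)."""
    pose = pose_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        verts = hand_vertices(pose, template)                    # (V, 3)
        dist = torch.cdist(verts, obj_verts).min(dim=1).values   # nearest object point per hand vertex
        # Pull predicted-contact vertices to within contact_eps of the object,
        # push non-contact vertices out past it. Because only an unsigned
        # distance is used, shallow interpenetration goes unpenalized, loosely
        # mirroring the paper's tolerance for soft-tissue deformation.
        attract = target_contact * torch.relu(dist - contact_eps)
        repel = (1.0 - target_contact) * torch.relu(contact_eps - dist)
        loss = (attract + repel).mean()
        loss.backward()
        optimizer.step()
    return pose.detach()

# Toy usage with random placeholder geometry (778 hand vertices, as in MANO).
template = torch.randn(778, 3) * 0.05
obj_verts = torch.randn(3000, 3) * 0.05
target_contact = (torch.rand(778) > 0.8).float()
refined_pose = refine_pose(torch.zeros(3), template, obj_verts, target_contact)
```
In the paper itself, the target contact is inferred on both the hand and object surfaces by a network trained on ground-truth contact data, and the differentiable contact model explicitly encourages some mesh interpenetration; the sketch above only approximates that behavior with a hard distance threshold.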
Related papers
- Pose Priors from Language Models [74.61186408764559]
We present a zero-shot pose optimization method that enforces accurate physical contact constraints.
Our method produces surprisingly compelling pose reconstructions of people in close contact.
Unlike previous approaches, our method provides a unified framework for resolving self-contact and person-to-person contact.
arXiv Detail & Related papers (2024-05-06T17:59:36Z) - Contact-aware Human Motion Generation from Textual Descriptions [57.871692507044344]
This paper addresses the problem of generating 3D interactive human motion from text.
We create a novel dataset named RICH-CAT, representing "Contact-Aware Texts".
We propose a novel approach named CATMO for text-driven interactive human motion synthesis.
arXiv Detail & Related papers (2024-03-23T04:08:39Z) - ContactGen: Generative Contact Modeling for Grasp Generation [37.56729700157981]
This paper presents a novel object-centric contact representation ContactGen for hand-object interaction.
We propose a conditional generative model to predict ContactGen and adopt model-based optimization to predict diverse and geometrically feasible grasps.
arXiv Detail & Related papers (2023-10-05T17:59:45Z) - Nonrigid Object Contact Estimation With Regional Unwrapping Transformer [16.988812837693203]
Acquiring contact patterns between hands and nonrigid objects is a common concern in the vision and robotics community.
Existing learning-based methods focus more on contact with rigid objects from monocular images.
We propose a novel hand-object contact representation called RUPs, which unwraps the roughly estimated hand-object surfaces as multiple high-resolution 2D regional profiles.
arXiv Detail & Related papers (2023-08-27T11:37:26Z) - Learning Explicit Contact for Implicit Reconstruction of Hand-held
Objects from Monocular Images [59.49985837246644]
We show how to model contacts in an explicit way to benefit the implicit reconstruction of hand-held objects.
In the first part, we propose a new subtask of directly estimating 3D hand-object contacts from a single image.
In the second part, we introduce a novel method to diffuse estimated contact states from the hand mesh surface to nearby 3D space.
arXiv Detail & Related papers (2023-05-31T17:59:26Z) - HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z) - Estimating 3D Motion and Forces of Human-Object Interactions from
Internet Videos [49.52070710518688]
We introduce a method to reconstruct the 3D motion of a person interacting with an object from a single RGB video.
Our method estimates the 3D poses of the person together with the object pose, the contact positions and the contact forces on the human body.
arXiv Detail & Related papers (2021-11-02T13:40:18Z) - Contact-Aware Retargeting of Skinned Motion [49.71236739408685]
This paper introduces a motion estimation method that preserves self-contacts and prevents interpenetration.
The method identifies self-contacts and ground contacts in the input motion, and optimizes the motion to apply to the output skeleton.
In experiments, our results quantitatively outperform previous methods, and a user study rates our retargeted motions as higher quality than those produced by recent works.
arXiv Detail & Related papers (2021-09-15T17:05:02Z) - Hand-Object Contact Consistency Reasoning for Human Grasps Generation [6.398433415259542]
We propose to generate human grasps given a 3D object in the world.
The key observation is that it is crucial to model the consistency between the hand contact points and the object contact regions.
Experiments show that our approach improves human grasp generation over state-of-the-art approaches by a large margin.
arXiv Detail & Related papers (2021-04-07T17:57:14Z) - ContactPose: A Dataset of Grasps with Object Contact and Hand Pose [27.24450178180785]
We introduce ContactPose, the first dataset of hand-object contact paired with hand pose, object pose, and RGB-D images.
ContactPose has 2306 unique grasps of 25 household objects grasped with 2 functional intents by 50 participants, and more than 2.9 M RGB-D grasp images.
arXiv Detail & Related papers (2020-07-19T01:01:14Z)