D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object
Interactions
- URL: http://arxiv.org/abs/2112.03028v1
- Date: Wed, 1 Dec 2021 17:04:39 GMT
- Title: D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object
Interactions
- Authors: Sammy Christen, Muhammed Kocabas, Emre Aksan, Jemin Hwangbo, Jie Song,
Otmar Hilliges
- Abstract summary: We introduce the dynamic grasp synthesis task.
Given an object with a known 6D pose and a grasp reference, our goal is to generate motions that move the object to a target 6D pose.
A hierarchical approach decomposes the task into low-level grasping and high-level motion synthesis.
- Score: 47.55376158184854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the dynamic grasp synthesis task: given an object with a known
6D pose and a grasp reference, our goal is to generate motions that move the
object to a target 6D pose. This is challenging, because it requires reasoning
about the complex articulation of the human hand and the intricate physical
interaction with the object. We propose a novel method that frames this problem
in the reinforcement learning framework and leverages a physics simulation,
both to learn and to evaluate such dynamic interactions. A hierarchical
approach decomposes the task into low-level grasping and high-level motion
synthesis. It can be used to generate novel hand sequences that approach,
grasp, and move an object to a desired location, while retaining
human-likeness. We show that our approach leads to stable grasps and generates
a wide range of motions. Furthermore, even imperfect labels can be corrected by
our method to generate dynamic interaction sequences. Video is available at
https://eth-ait.github.io/d-grasp/ .
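As a rough illustration of the hierarchical decomposition described in the abstract, the following minimal Python sketch pairs a high-level policy that proposes wrist targets with a low-level policy that produces hand joint targets. All names (HighLevelPolicy, LowLevelGraspPolicy, rollout) are hypothetical, and the interpolation, random linear layer, and 6-vector poses are toy stand-ins for the paper's trained networks, pose representation, and physics simulator.
```python
import numpy as np

class HighLevelPolicy:
    """Proposes the next wrist pose by moving toward the target 6D pose."""
    def act(self, wrist_pose, target_pose, step_size=0.1):
        # Plain interpolation stands in for the learned motion-synthesis policy.
        return wrist_pose + step_size * (target_pose - wrist_pose)

class LowLevelGraspPolicy:
    """Maps (wrist, object, grasp-reference) features to hand joint targets."""
    def __init__(self, n_joints=45, obs_dim=9, seed=0):
        # A fixed random linear layer stands in for a trained policy network.
        self.W = np.random.default_rng(seed).normal(scale=0.1, size=(n_joints, obs_dim))
    def act(self, obs):
        return np.tanh(self.W @ obs)

def rollout(object_pose, target_pose, grasp_reference, horizon=50):
    high, low = HighLevelPolicy(), LowLevelGraspPolicy()
    wrist_pose = object_pose.copy()  # 6D poses kept as plain 6-vectors for brevity
    for _ in range(horizon):
        wrist_pose = high.act(wrist_pose, target_pose)  # where the hand should go
        obs = np.concatenate([wrist_pose[:3], object_pose[:3], grasp_reference[:3]])
        joint_targets = low.act(obs)  # how the fingers should close
        # In the paper, such targets would drive a physics simulator, which both
        # trains the policies and evaluates the physical plausibility of the result.
    return wrist_pose, joint_targets

wrist, joints = rollout(np.zeros(6), np.ones(6), np.zeros(6))
print(wrist.round(2), joints.shape)  # wrist converges toward the target pose; (45,)
```
Splitting the problem this way lets the low-level policy specialize in maintaining a stable grasp while the high-level policy only reasons about where the hand and object should move.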
Related papers
- EgoGaussian: Dynamic Scene Understanding from Egocentric Video with 3D Gaussian Splatting [95.44545809256473]
EgoGaussian is a method capable of simultaneously reconstructing 3D scenes and dynamically tracking 3D object motion from RGB egocentric input alone.
We show significant improvements in terms of both dynamic object and background reconstruction quality compared to the state-of-the-art.
arXiv Detail & Related papers (2024-06-28T10:39:36Z)
- Controllable Human-Object Interaction Synthesis [77.56877961681462]
We propose Controllable Human-Object Interaction Synthesis (CHOIS) to generate synchronized object motion and human motion in 3D scenes.
Here, language descriptions inform style and intent, and waypoints, which can be effectively extracted from high-level planning, ground the motion in the scene.
Our module seamlessly integrates with a path planning module, enabling the generation of long-term interactions in 3D environments (see the conditioning sketch after this list).
arXiv Detail & Related papers (2023-12-06T21:14:20Z)
- CG-HOI: Contact-Guided 3D Human-Object Interaction Generation [29.3564427724612]
We propose CG-HOI, the first method to generate dynamic 3D human-object interactions (HOIs) from text.
We model the motion of both human and object in an interdependent fashion, as semantically rich human motion rarely happens in isolation.
We show that our joint contact-based human-object interaction approach generates realistic and physically plausible sequences.
arXiv Detail & Related papers (2023-11-27T18:59:10Z)
- Physically Plausible Full-Body Hand-Object Interaction Synthesis [32.83908152822006]
We propose a physics-based method for synthesizing dexterous hand-object interactions in a full-body setting.
Existing methods often focus on isolated segments of the interaction process and rely on data-driven techniques that may result in artifacts.
arXiv Detail & Related papers (2023-09-14T17:55:18Z)
- Novel-view Synthesis and Pose Estimation for Hand-Object Interaction from Sparse Views [41.50710846018882]
We propose a neural rendering and pose estimation system for hand-object interaction from sparse views.
We first learn the shape and appearance prior knowledge of hands and objects separately with the neural representation.
During the online stage, we design a rendering-based joint model fitting framework to understand the dynamic hand-object interaction.
arXiv Detail & Related papers (2023-08-22T05:17:41Z)
- NIFTY: Neural Object Interaction Fields for Guided Human Motion Synthesis [21.650091018774972]
We create a neural interaction field attached to a specific object, which outputs the distance to the valid interaction manifold given a human pose as input.
This interaction field guides the sampling of an object-conditioned human motion diffusion model (see the guidance sketch after this list).
We synthesize realistic motions for sitting and lifting with several objects, outperforming alternative approaches in terms of motion quality and successful action completion.
arXiv Detail & Related papers (2023-07-14T17:59:38Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to study participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- SAGA: Stochastic Whole-Body Grasping with Contact [60.43627793243098]
Human grasping synthesis has numerous applications including AR/VR, video games, and robotics.
In this work, our goal is to synthesize whole-body grasping motion.
Given a 3D object, we aim to generate diverse and natural whole-body human motions that approach and grasp the object.
arXiv Detail & Related papers (2021-12-19T10:15:30Z)
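NIFTY, listed above, describes a neural interaction field whose distance output guides the sampling of a motion diffusion model. Below is a heavily simplified Python sketch of that guidance loop: the quadratic "field", the shrinkage denoiser, and all names (interaction_field, guided_denoise_step) are illustrative assumptions, not the paper's models.
```python
import numpy as np

POSE_DIM = 63                          # e.g. 21 joints x 3; illustrative
VALID_POSE = np.full(POSE_DIM, 0.5)    # toy "manifold": a single valid pose

def interaction_field(pose):
    """Distance to the valid-interaction manifold (toy quadratic field)."""
    return 0.5 * np.sum((pose - VALID_POSE) ** 2)

def field_grad(pose):
    return pose - VALID_POSE           # analytic gradient of the toy field

def guided_denoise_step(pose, t, guidance_scale=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    denoised = pose * (1.0 - 0.05 * t)                          # stand-in denoiser
    guided = denoised - guidance_scale * field_grad(denoised)   # pull toward the manifold
    return guided + 0.01 * t * rng.normal(size=pose.shape)      # re-inject schedule noise

pose = np.random.default_rng(1).normal(size=POSE_DIM)
for t in np.linspace(1.0, 0.0, 20):    # coarse reverse-diffusion loop
    pose = guided_denoise_step(pose, t)
print("final distance to manifold:", interaction_field(pose))
```
The key idea survives the simplification: the field's gradient nudges every denoising step toward poses that constitute valid interactions with the object.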
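CHOIS, listed above, conditions generation on language (style and intent) and on sparse waypoints extracted from high-level planning (scene grounding). The sketch below illustrates only that conditioning pattern; embed_text and synthesize_interaction are hypothetical placeholders, with piecewise-linear interpolation standing in for the learned conditional diffusion model.
```python
import numpy as np

def embed_text(description, dim=32):
    """Toy deterministic text embedding; a real system would use a language model."""
    rng = np.random.default_rng(sum(ord(c) for c in description))
    return rng.normal(size=dim)

def synthesize_interaction(description, waypoints, horizon=60):
    """Generate a (horizon, 3) root trajectory that passes through the waypoints."""
    cond = embed_text(description)     # style/intent conditioning signal
    ts = np.linspace(0, len(waypoints) - 1, horizon)
    # Interpolation through waypoints stands in for the learned generator of
    # synchronized object and human motion grounded in the scene.
    traj = np.stack([np.interp(ts, np.arange(len(waypoints)), waypoints[:, d])
                     for d in range(3)], axis=1)
    style_jitter = 0.01 * np.sin(ts)[:, None] * cond[:3]  # token effect of the text condition
    return traj + style_jitter

waypoints = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.]])
motion = synthesize_interaction("pick up the box and carry it to the table", waypoints)
print(motion.shape)  # (60, 3)
```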