ArtiGrasp: Physically Plausible Synthesis of Bi-Manual Dexterous
Grasping and Articulation
- URL: http://arxiv.org/abs/2309.03891v2
- Date: Sun, 3 Mar 2024 18:53:09 GMT
- Title: ArtiGrasp: Physically Plausible Synthesis of Bi-Manual Dexterous
Grasping and Articulation
- Authors: Hui Zhang, Sammy Christen, Zicong Fan, Luocheng Zheng, Jemin Hwangbo,
Jie Song, Otmar Hilliges
- Abstract summary: ArtiGrasp is a method to synthesize bi-manual hand-object interactions that include grasping and articulation.
Our framework unifies grasping and articulation within a single policy guided by a single hand pose reference.
We show that our method can generate motions with noisy hand-object pose estimates from an off-the-shelf image-based regressor.
- Score: 29.999224233718927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ArtiGrasp, a novel method to synthesize bi-manual hand-object
interactions that include grasping and articulation. This task is challenging
due to the diversity of the global wrist motions and the precise finger control
that are necessary to articulate objects. ArtiGrasp leverages reinforcement
learning and physics simulations to train a policy that controls the global and
local hand pose. Our framework unifies grasping and articulation within a
single policy guided by a single hand pose reference. Moreover, to facilitate
the training of the precise finger control required for articulation, we
present a learning curriculum with increasing difficulty. It starts with
single-hand manipulation of stationary objects and continues with multi-agent
training including both hands and non-stationary objects. To evaluate our
method, we introduce Dynamic Object Grasping and Articulation, a task that
involves bringing an object into a target articulated pose. This task requires
grasping, relocation, and articulation. We show our method's efficacy towards
this task. We further demonstrate that our method can generate motions with
noisy hand-object pose estimates from an off-the-shelf image-based regressor.
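The abstract describes, but does not include, the training scheme: a single goal-conditioned policy trained with reinforcement learning in a physics simulator, under a curriculum that progresses from single-hand manipulation of stationary objects to bi-manual manipulation of free objects. A minimal sketch of such a curriculum loop follows; every class and name here (CurriculumStage, PhysicsEnvStub, PolicyStub) is an illustrative placeholder, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): one policy trained with RL in a
# physics simulator under a curriculum of increasing difficulty, from
# single-hand manipulation of stationary objects to bi-manual manipulation of
# free objects. All classes are stubs standing in for the real components.
from dataclasses import dataclass
import random


@dataclass
class CurriculumStage:
    name: str
    hands: int           # 1 = single hand, 2 = both hands
    object_fixed: bool   # True = object base fixed to the world
    episodes: int


# Hypothetical stage schedule mirroring the progression described above.
CURRICULUM = [
    CurriculumStage("single_hand_fixed_object", hands=1, object_fixed=True, episodes=1000),
    CurriculumStage("single_hand_free_object", hands=1, object_fixed=False, episodes=1000),
    CurriculumStage("bi_manual_free_object", hands=2, object_fixed=False, episodes=2000),
]


class PhysicsEnvStub:
    """Stand-in for a physics simulation configured for one curriculum stage."""

    def __init__(self, stage: CurriculumStage):
        self.stage = stage

    def rollout(self, policy: "PolicyStub") -> float:
        # A real environment would step the simulator with the policy's hand
        # actions and return the episode return; here we return a dummy value.
        return random.random()


class PolicyStub:
    """Stand-in for a goal-conditioned policy controlling global and local hand pose."""

    def update(self, episode_return: float) -> None:
        pass  # an RL update (e.g., PPO) would go here


def train(policy: PolicyStub) -> None:
    for stage in CURRICULUM:
        env = PhysicsEnvStub(stage)
        for _ in range(stage.episodes):
            policy.update(env.rollout(policy))


if __name__ == "__main__":
    train(PolicyStub())
```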
Related papers
- DiffH2O: Diffusion-Based Synthesis of Hand-Object Interactions from Textual Descriptions [15.417836855005087]
We propose DiffH2O, a novel method to synthesize realistic, one or two-handed object interactions.
We decompose the task into a grasping stage and a text-based interaction stage.
In the grasping stage, the model only generates hand motions, whereas in the interaction phase both hand and object poses are synthesized.
arXiv Detail & Related papers (2024-03-26T16:06:42Z)
- Modular Neural Network Policies for Learning In-Flight Object Catching with a Robot Hand-Arm System [55.94648383147838]
We present a modular framework designed to enable a robot hand-arm system to learn how to catch flying objects.
Our framework consists of five core modules, including: (i) an object state estimator that learns to predict object trajectories, (ii) a catching pose quality network that learns to score and rank object poses for catching, (iii) a reaching control policy trained to move the robot hand to pre-catch poses, and (iv) a grasping control policy trained to perform soft catching motions.
We conduct extensive evaluations of our framework in simulation, for each module and for the integrated system, demonstrating high success rates of in-flight catching.
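As a rough illustration of how such a modular pipeline can be composed, the modules can be wired together behind a small controller. The class and method names below are assumptions made for the sketch, not the paper's actual interfaces.

```python
# Illustrative composition of a modular catching pipeline; all interfaces here
# are placeholders, not the paper's code.
from typing import List, Tuple

Pose = Tuple[float, float, float]  # simplified, position-only stand-in for a pose


class ObjectStateEstimator:
    def predict_trajectory(self, observed: List[Pose]) -> List[Pose]:
        return observed  # a real module would forecast future object poses


class CatchPoseScorer:
    def best_catch_pose(self, candidates: List[Pose]) -> Pose:
        return candidates[-1]  # a real module would score and rank candidates


class ReachingPolicy:
    def act(self, pre_catch_pose: Pose) -> Pose:
        return pre_catch_pose  # a real module would output arm commands


class GraspingPolicy:
    def act(self, object_pose: Pose) -> Pose:
        return object_pose  # a real module would output soft-catch finger commands


def catching_step(observed: List[Pose]) -> Tuple[Pose, Pose]:
    """One control step: predict the trajectory, pick a catch pose, reach, then grasp."""
    trajectory = ObjectStateEstimator().predict_trajectory(observed)
    target = CatchPoseScorer().best_catch_pose(trajectory)
    arm_command = ReachingPolicy().act(target)
    hand_command = GraspingPolicy().act(trajectory[-1])
    return arm_command, hand_command
```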
arXiv Detail & Related papers (2023-12-21T16:20:12Z)
- Physically Plausible Full-Body Hand-Object Interaction Synthesis [32.83908152822006]
We propose a physics-based method for synthesizing dexterous hand-object interactions in a full-body setting.
Existing methods often focus on isolated segments of the interaction process and rely on data-driven techniques that may result in artifacts.
arXiv Detail & Related papers (2023-09-14T17:55:18Z)
- GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency [57.9920824261925]
Hands are dexterous and highly versatile manipulators that are central to how humans interact with objects and their environment.
Modeling realistic hand-object interactions is critical for applications in computer graphics, computer vision, and mixed reality.
GRIP is a learning-based method that takes as input the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.
arXiv Detail & Related papers (2023-08-22T17:59:51Z)
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high quality motion capture example.
We propose a simple greedy curriculum search algorithm that can successfully apply to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
- D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions [47.55376158184854]
We introduce the dynamic grasp synthesis task.
Given an object with a known 6D pose and a grasp reference, our goal is to generate motions that move the object to a target 6D pose.
A hierarchical approach decomposes the task into low-level grasping and high-level motion synthesis.
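A minimal sketch of that hierarchical decomposition, using assumed function names and data types rather than D-Grasp's actual API, might look like this:

```python
# Minimal sketch of a hierarchical decomposition: a high-level planner produces
# a coarse 6D object trajectory and a low-level grasping controller would track
# it. All names and types are illustrative assumptions.
from typing import List, Tuple

Pose6D = Tuple[float, float, float, float, float, float]  # position + rotation


def plan_object_path(start: Pose6D, target: Pose6D, steps: int = 10) -> List[Pose6D]:
    """High level: coarsely interpolate intermediate 6D object poses toward the target."""
    return [
        tuple(s + (t - s) * i / (steps - 1) for s, t in zip(start, target))
        for i in range(steps)
    ]


def grasp_and_track(grasp_reference: Pose6D, object_path: List[Pose6D]) -> List[Pose6D]:
    """Low level: a grasping policy would hold the object (starting from the grasp
    reference) and output hand actions realizing each intermediate pose; this stub
    simply echoes the planned path."""
    return object_path
```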
arXiv Detail & Related papers (2021-12-01T17:04:39Z)
- Towards unconstrained joint hand-object reconstruction from RGB videos [81.97694449736414]
Reconstructing hand-object manipulations holds great potential for robotics and for learning from human demonstrations.
We first propose a learning-free fitting approach for hand-object reconstruction which can seamlessly handle two-hand object interactions.
arXiv Detail & Related papers (2021-08-16T12:26:34Z)
- Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z)