GraspGF: Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping
- URL: http://arxiv.org/abs/2309.06038v3
- Date: Wed, 15 Nov 2023 00:28:32 GMT
- Title: GraspGF: Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping
- Authors: Tianhao Wu, Mingdong Wu, Jiyao Zhang, Yunchong Gan, Hao Dong
- Abstract summary: We propose a novel task called human-assisting dexterous grasping.
It aims to train a policy for controlling a robotic hand's fingers to assist users in grasping objects.
- Score: 11.63059055320262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of anthropomorphic robotic hands for assisting individuals in
situations where human hands may be unavailable or unsuitable has gained
significant importance. In this paper, we propose a novel task called
human-assisting dexterous grasping that aims to train a policy for controlling
a robotic hand's fingers to assist users in grasping objects. Unlike
conventional dexterous grasping, this task presents a more complex challenge as
the policy needs to adapt to diverse user intentions, in addition to the
object's geometry. We address this challenge by proposing an approach
consisting of two sub-modules: a hand-object-conditional grasping primitive
called Grasping Gradient Field (GraspGF), and a history-conditional residual
policy. GraspGF learns 'how' to grasp by estimating the gradient from a set of
successful grasping examples, while the residual policy determines 'when' and
at what speed the grasping action should be executed based on the trajectory
history. Experimental results demonstrate the superiority of our proposed
method compared to baselines, highlighting its user-awareness and practicality
in real-world applications. Code and demonstrations can be viewed at
"https://sites.google.com/view/graspgf".
Related papers
- COMBO-Grasp: Learning Constraint-Based Manipulation for Bimanual Occluded Grasping [56.907940167333656]
Occluded robot grasping refers to settings where the desired grasp poses are kinematically infeasible due to environmental constraints such as surface collisions.
Traditional robot manipulation approaches struggle with the complexity of the non-prehensile or bimanual strategies commonly used by humans.
We introduce Constraint-based Manipulation for Bimanual Occluded Grasping (COMBO-Grasp), a learning-based approach that leverages two coordinated policies.
arXiv Detail & Related papers (2025-02-12T01:31:01Z)
- A Powered Prosthetic Hand with Vision System for Enhancing the Anthropopathic Grasp [11.354158680070652]
We propose the Spatial Geometry-based Gesture Mapping (SG-GM) method, which constructs gesture functions based on the geometric features of the human hand grasping processes.
We also propose the Motion Trajectory Regression-based Grasping Intent Estimation (MTR-GIE) algorithm.
Experiments were conducted on grasping 8 common daily objects, such as a cup and a fork.
arXiv Detail & Related papers (2024-12-10T01:45:14Z)
- Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions on the environment.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z)
- Learning Goal-oriented Bimanual Dough Rolling Using Dynamic Heterogeneous Graph Based on Human Demonstration [19.74767906744719]
Soft object manipulation poses significant challenges for robots, requiring effective techniques for state representation and manipulation policy learning.
This research paper introduces a novel approach: a dynamic heterogeneous graph-based model for learning goal-oriented soft object manipulation policies.
arXiv Detail & Related papers (2024-10-15T16:12:00Z)
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation [49.03287909942888]
We propose a visuomotor policy learning framework that fine-tunes a video diffusion model on human demonstrations of a given task.
We then generate an example execution of the task conditioned on images of a novel scene, and use this synthesized execution directly to control the robot.
arXiv Detail & Related papers (2024-06-24T17:59:45Z)
- Efficient Representations of Object Geometry for Reinforcement Learning of Interactive Grasping Policies [29.998917158604694]
We present a reinforcement learning framework that learns the interactive grasping of various geometrically distinct real-world objects.
Videos of learned interactive policies are available at https://maltemosbach.org/io/geometry_aware_grasping_policies.
arXiv Detail & Related papers (2022-11-20T11:47:33Z)
- Human-in-the-Loop Imitation Learning using Remote Teleoperation [72.2847988686463]
We build a data collection system tailored to 6-DoF manipulation settings.
We develop an algorithm to train the policy iteratively on new data collected by the system.
We demonstrate that agents trained on data collected by our intervention-based system and algorithm outperform agents trained on an equivalent number of samples collected by non-interventional demonstrators.
arXiv Detail & Related papers (2020-12-12T05:30:35Z)
- Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z)
- AvE: Assistance via Empowerment [77.08882807208461]
We propose a new paradigm for assistance by instead increasing the human's ability to control their environment.
This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state (a minimal sketch of this empowerment objective appears after this list).
arXiv Detail & Related papers (2020-06-26T04:40:11Z)
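Two of the entries above ("Learning to Assist Humans without Inferring Rewards" and "AvE: Assistance via Empowerment") share one objective: instead of inferring the human's reward, the assistant acts so that the human's own actions retain maximal influence. The sketch below uses a crude reachability proxy for empowerment in a small discrete environment; the function names and the enumeration-based estimator are assumptions for illustration (real methods use learned variational estimators), not either paper's algorithm.

def reachable_states(step, state, human_actions, horizon):
    """All (hashable) states the human can reach within `horizon` steps,
    given a known deterministic transition function step(state, action)."""
    reached, frontier = {state}, {state}
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in human_actions}
        reached |= frontier
    return reached

def assistive_action(step, state, robot_actions, human_actions, horizon=3):
    """Choose the robot action after which the human's actions can still
    reach the largest variety of states: a simple empowerment proxy."""
    return max(
        robot_actions,
        key=lambda a: len(reachable_states(step, step(state, a),
                                           human_actions, horizon)),
    )

For example, with a hypothetical step((x, y), a) implementing moves on a grid, the assistant avoids actions that box the human into a corner, since those shrink the set of states the human can still reach.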
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.