Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation
- URL: http://arxiv.org/abs/2303.11057v3
- Date: Fri, 21 Jul 2023 12:43:23 GMT
- Title: Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation
- Authors: Ruihai Wu, Chuanruo Ning, Hao Dong
- Abstract summary: We study deformable object manipulation using dense visual affordance, with generalization towards diverse states.
We propose a framework for learning this representation, with novel designs such as multi-stage stable learning and efficient self-supervised data collection without experts.
- Score: 5.181990978232578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding and manipulating deformable objects (e.g., ropes and fabrics)
is an essential yet challenging task with broad applications. Difficulties arise
from the complex states and dynamics, diverse configurations, and high-dimensional
action spaces of deformable objects. Moreover, these manipulation tasks usually
require multiple steps to accomplish, and greedy policies can easily get stuck in
locally optimal states. Existing studies usually tackle this problem with
reinforcement learning or by imitating expert demonstrations, but they are limited
in modeling complex states or require hand-crafted expert policies. In this
paper, we study deformable object manipulation using dense visual affordance,
with generalization towards diverse states, and propose a novel kind of
foresightful dense affordance, which avoids local optima by estimating each
state's value for long-term manipulation. We propose a framework for learning this
representation, with novel designs such as multi-stage stable learning and
efficient self-supervised data collection without experts. Experiments
demonstrate the superiority of our proposed foresightful dense affordance.
Project page: https://hyperplane-lab.github.io/DeformableAffordance
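To make the idea concrete, here is a minimal PyTorch sketch of acting greedily over a dense, value-style affordance map: a fully-convolutional network scores every pixel by the estimated long-term value of manipulating there, and the policy picks the argmax. The network, shapes, and names are hypothetical stand-ins, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DenseAffordanceNet(nn.Module):
    """Toy fully-convolutional net: RGB image -> one value per pixel.
    A hypothetical stand-in, not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, rgb):                  # rgb: (B, 3, H, W)
        return self.net(rgb).squeeze(1)      # (B, H, W) per-pixel values

def select_pick_point(model, rgb):
    """Greedy argmax over the map. Because each pixel scores the estimated
    long-term value of acting there (not the immediate gain), the greedy
    choice already accounts for future manipulation steps."""
    with torch.no_grad():
        affordance = model(rgb)[0]           # (H, W)
        flat = torch.argmax(affordance).item()
        v, u = divmod(flat, affordance.shape[1])
    return u, v                              # pixel (column, row)

model = DenseAffordanceNet()
u, v = select_pick_point(model, torch.rand(1, 3, 128, 128))
print(f"pick at pixel ({u}, {v})")
```

The point of the "foresightful" design is that the greedy argmax is taken over value estimates for the whole remaining task, so it avoids the local optima that a map of immediate gains would fall into.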
Related papers
- DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics [97.75188532559952]
We propose a principled framework that abstracts dexterous manipulation skills from human demonstration.
We then train a skill model using demonstrations for planning over action abstractions in imagination.
To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks.
arXiv Detail & Related papers (2023-03-27T17:59:49Z)
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high-quality motion capture example.
We propose a simple greedy curriculum search algorithm that can be successfully applied to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
- Efficient Representations of Object Geometry for Reinforcement Learning of Interactive Grasping Policies [29.998917158604694]
We present a reinforcement learning framework that learns the interactive grasping of various geometrically distinct real-world objects.
Videos of learned interactive policies are available at https://maltemosbach.org/io/geometry_aware_grasping_policies.
arXiv Detail & Related papers (2022-11-20T11:47:33Z)
- Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation [64.00292856805865]
We propose PlAnning with Spatial-Temporal Abstraction (PASTA), which incorporates both spatial abstraction and temporal abstraction (see the sketch after this entry).
Our framework maps high-dimensional 3D observations into a set of latent vectors and plans over skill sequences on top of the latent set representation.
We show that our method can effectively perform challenging deformable object manipulation tasks in the real world.
arXiv Detail & Related papers (2022-10-27T19:57:04Z)
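A rough NumPy sketch of the two abstractions described above (the skill names, chunk-based encoder, and toy dynamics are hypothetical stand-ins, not PASTA's actual models): the point cloud is compressed into a small set of latent vectors, and planning searches over short sequences of discrete skills rather than raw low-level actions.

```python
import itertools
import numpy as np

SKILLS = ["fold", "push", "pick_place"]      # hypothetical skill library

def encode_point_cloud(points, n_components=4):
    """Spatial abstraction (stand-in): split the cloud into chunks and
    summarize each chunk with simple statistics as one latent vector."""
    chunks = np.array_split(points, n_components)
    return np.stack([np.concatenate([c.mean(0), c.std(0)]) for c in chunks])

def dummy_dynamics(latents, skill):
    """Stand-in for a learned per-skill latent dynamics model."""
    shift = {"fold": 0.10, "push": -0.05, "pick_place": 0.20}[skill]
    return latents + shift

def plan(latents, goal_latents, dynamics, horizon=2):
    """Temporal abstraction: search over short *skill* sequences,
    scoring predicted latent sets against the goal."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(SKILLS, repeat=horizon):
        pred = latents
        for skill in seq:
            pred = dynamics(pred, skill)
        cost = np.linalg.norm(pred - goal_latents)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

current = encode_point_cloud(np.random.rand(400, 3))
goal = encode_point_cloud(np.random.rand(400, 3))
print(plan(current, goal, dummy_dynamics))
```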
- Learning Generalizable Dexterous Manipulation from Human Grasp Affordance [11.060931225148936]
Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics.
Recent progress in imitation learning has greatly improved sample efficiency compared to reinforcement learning.
We propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category.
arXiv Detail & Related papers (2022-04-05T16:26:22Z)
- Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning [108.08083976908195]
We show that policies learned by existing reinforcement learning algorithms can in fact be generalist.
We show that a single generalist policy can perform in-hand manipulation of over 100 geometrically-diverse real-world objects.
Interestingly, we find that multi-task learning with object point cloud representations not only generalizes better but even outperforms single-object specialist policies.
arXiv Detail & Related papers (2021-11-04T17:59:56Z)
- SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation [15.477950393687836]
We present SoftGym, a set of open-source simulated benchmarks for manipulating deformable objects.
We evaluate a variety of algorithms on these tasks and highlight challenges for reinforcement learning algorithms.
arXiv Detail & Related papers (2020-11-14T03:46:59Z)
- State-Only Imitation Learning for Dexterous Manipulation [63.03621861920732]
In this paper, we explore state-only imitation learning.
We train an inverse dynamics model and use it to predict actions for state-only demonstrations.
Our method performs on par with state-action approaches and considerably outperforms RL alone.
arXiv Detail & Related papers (2020-04-07T17:57:20Z)
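A minimal sketch of this recipe with hypothetical dimensions and random stand-in data: an inverse dynamics model is first trained on the agent's own transitions, where actions are known, and is then used to label state-only demonstrations with inferred actions for ordinary behavior cloning.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 4                # hypothetical dimensions

class InverseDynamics(nn.Module):
    """Predict the action that moved the system from s_t to s_{t+1}."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

idm = InverseDynamics()
opt = torch.optim.Adam(idm.parameters(), lr=1e-3)

# 1) Train on the agent's own exploration transitions, where actions
#    are known (random stand-in data here).
s = torch.randn(256, STATE_DIM)
a = torch.randn(256, ACTION_DIM)
s_next = torch.randn(256, STATE_DIM)
for _ in range(100):
    loss = nn.functional.mse_loss(idm(s, s_next), a)
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2) Label state-only demonstrations with inferred actions; the
#    (state, inferred action) pairs can then be imitated with
#    ordinary behavior cloning.
demo_states = torch.randn(50, STATE_DIM)
with torch.no_grad():
    inferred_actions = idm(demo_states[:-1], demo_states[1:])
print(inferred_actions.shape)  # (49, ACTION_DIM)
```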
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has raised unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Learning Rope Manipulation Policies Using Dense Object Descriptors Trained on Synthetic Depth Data [32.936908766549344]
We present an approach that learns point-pair correspondences between initial and goal rope configurations.
In 50 trials of a knot-tying task with the ABB YuMi Robot, the system achieves a 66% knot-tying success rate from previously unseen configurations.
arXiv Detail & Related papers (2020-03-03T23:43:05Z)
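Roughly, dense-descriptor correspondence works as sketched below (a toy network with hypothetical shapes, not the paper's system): each depth image is mapped to a per-pixel descriptor field, and a pick point chosen in the goal configuration is transferred to the current image by nearest-neighbor search in descriptor space.

```python
import torch
import torch.nn as nn

class DescriptorNet(nn.Module):
    """Toy fully-convolutional net: depth image -> per-pixel descriptors."""
    def __init__(self, d=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, d, 1),
        )

    def forward(self, depth):                # depth: (B, 1, H, W)
        return self.net(depth)               # (B, D, H, W)

def correspond(net, goal_img, current_img, goal_uv):
    """Transfer a pixel from the goal image to the current image by
    nearest-neighbor search in descriptor space."""
    with torch.no_grad():
        d_goal = net(goal_img)[0]                           # (D, H, W)
        d_cur = net(current_img)[0]                         # (D, H, W)
        ref = d_goal[:, goal_uv[1], goal_uv[0]]             # descriptor at (u, v)
        dist = ((d_cur - ref[:, None, None]) ** 2).sum(0)   # (H, W)
        flat = torch.argmin(dist).item()
        v, u = divmod(flat, dist.shape[1])
    return u, v

net = DescriptorNet()
goal, cur = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(correspond(net, goal, cur, goal_uv=(10, 20)))
```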
This list is automatically generated from the titles and abstracts of the papers on this site.