Multi-Modal Learning of Keypoint Predictive Models for Visual Object
Manipulation
- URL: http://arxiv.org/abs/2011.03882v2
- Date: Fri, 25 Jun 2021 16:52:13 GMT
- Title: Multi-Modal Learning of Keypoint Predictive Models for Visual Object
Manipulation
- Authors: Sarah Bechtle, Neha Das and Franziska Meier
- Abstract summary: Humans have impressive generalization capabilities when it comes to manipulating objects in novel environments.
How to learn such body schemas for robots remains an open problem.
We develop a self-supervised approach that can extend a robot's kinematic model when grasping an object, from visual latent representations.
- Score: 6.853826783413853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans have impressive generalization capabilities when it comes to
manipulating objects and tools in completely novel environments. These
capabilities are, at least partially, a result of humans having internal models
of their bodies and any grasped object. How to learn such body schemas for
robots remains an open problem. In this work, we develop a self-supervised
approach that can extend a robot's kinematic model when grasping an object, from
visual latent representations. Our framework comprises two components: (1) we
present a multi-modal keypoint detector: an autoencoder architecture trained by
fusing proprioception and vision to predict visual keypoints on an object; (2)
we show how we can use our learned keypoint detector to learn an extension of
the kinematic chain by regressing virtual joints from the predicted visual
keypoints. Our evaluation shows that our approach learns to consistently
predict visual keypoints on objects in the manipulator's hand, and thus can
easily facilitate learning an extended kinematic chain to include the object
grasped in various configurations, from a few seconds of visual data. Finally,
we show that this extended kinematic chain lends itself to object manipulation
tasks such as placing a grasped object, and we present experiments in simulation
and on hardware.
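Component (2) above — regressing a virtual joint of the extended kinematic chain from predicted visual keypoints — can be illustrated with a minimal sketch. The simplification below (a single rigid offset of an object keypoint in the end-effector frame, recovered by least squares from a few frames) and all names in it are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def estimate_virtual_link(ee_rotations, ee_positions, keypoints_world):
    """Least-squares estimate of a fixed object-keypoint offset p in the
    end-effector frame, given keypoints_world[i] ~ R_i @ p + t_i per frame.
    This stands in for regressing a virtual joint from visual keypoints."""
    R = np.concatenate(ee_rotations, axis=0)  # stacked (3N, 3)
    d = np.concatenate(
        [k - t for k, t in zip(keypoints_world, ee_positions)]
    )  # stacked (3N,)
    p, *_ = np.linalg.lstsq(R, d, rcond=None)
    return p

# Synthetic check: recover a known offset from 20 noisy observations.
rng = np.random.default_rng(0)
true_p = np.array([0.05, 0.0, 0.12])  # object keypoint in the EE frame
Rs, ts, kps = [], [], []
for _ in range(20):
    angle = rng.uniform(-np.pi, np.pi)
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # z-rotation
    t = rng.uniform(-0.5, 0.5, size=3)
    Rs.append(R)
    ts.append(t)
    kps.append(R @ true_p + t + rng.normal(0.0, 1e-3, size=3))

p_hat = estimate_virtual_link(Rs, ts, kps)
```

With keypoints predicted consistently across a few seconds of data, such a fit converges quickly, which matches the paper's claim that the extended chain can be learned from little visual data.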
Related papers
- Kinematic-aware Prompting for Generalizable Articulated Object
Manipulation with LLMs [53.66070434419739]
Generalizable articulated object manipulation is essential for home-assistant robots.
We propose a kinematic-aware prompting framework that prompts Large Language Models with kinematic knowledge of objects to generate low-level motion waypoints.
Our framework outperforms traditional methods on 8 seen categories and shows powerful zero-shot capability on 8 unseen articulated object categories.
arXiv Detail & Related papers (2023-11-06T03:26:41Z)
- Object Discovery from Motion-Guided Tokens [50.988525184497334]
We augment the auto-encoder representation learning framework with motion-guidance and mid-level feature tokenization.
Our approach enables the emergence of interpretable object-specific mid-level features.
arXiv Detail & Related papers (2023-03-27T19:14:00Z)
- Masked World Models for Visual Control [90.13638482124567]
We introduce a visual model-based RL framework that decouples visual representation learning and dynamics learning.
We demonstrate that our approach achieves state-of-the-art performance on a variety of visual robotic tasks.
arXiv Detail & Related papers (2022-06-28T18:42:27Z)
- KINet: Unsupervised Forward Models for Robotic Pushing Manipulation [8.572983995175909]
We introduce KINet -- an unsupervised framework to reason about object interactions based on a keypoint representation.
Our model learns to associate objects with keypoint coordinates and discovers a graph representation of the system.
By learning to perform physical reasoning in the keypoint space, our model automatically generalizes to scenarios with a different number of objects.
arXiv Detail & Related papers (2022-02-18T03:32:08Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step toward dynamics modeling of hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
- VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects [19.296344218177534]
The space of 3D articulated objects is exceptionally rich in their myriad semantic categories, diverse shape geometry, and complicated part functionality.
Previous works mostly abstract kinematic structure with estimated joint parameters and part poses as the visual representations for manipulating 3D articulated objects.
We propose object-centric actionable visual priors as a novel perception-interaction handshaking point, where the perception system outputs guidance that is more actionable than kinematic structure estimation.
arXiv Detail & Related papers (2021-06-28T07:47:31Z)
- Simultaneous Multi-View Object Recognition and Grasping in Open-Ended Domains [0.0]
We propose a deep learning architecture with augmented memory capacities to handle open-ended object recognition and grasping simultaneously.
We demonstrate the ability of our approach to grasp never-seen-before objects and to rapidly learn new object categories using very few examples on-site in both simulation and real-world settings.
arXiv Detail & Related papers (2021-06-03T14:12:11Z)
- Unadversarial Examples: Designing Objects for Robust Vision [100.4627585672469]
We develop a framework that exploits the sensitivity of modern machine learning algorithms to input perturbations in order to design "robust objects".
We demonstrate the efficacy of the framework on a wide variety of vision-based tasks ranging from standard benchmarks to (in-simulation) robotics.
arXiv Detail & Related papers (2020-12-22T18:26:07Z)
- S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency [11.357804868755155]
We advocate semantic 3D keypoints as a visual representation, and present a semi-supervised training objective.
Unlike local texture-based approaches, our model integrates contextual information from a large area.
We demonstrate that this ability to locate semantic keypoints enables high level scripting of human understandable behaviours.
arXiv Detail & Related papers (2020-09-30T14:44:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.