Visual Prediction of Priors for Articulated Object Interaction
- URL: http://arxiv.org/abs/2006.03979v1
- Date: Sat, 6 Jun 2020 21:17:03 GMT
- Title: Visual Prediction of Priors for Articulated Object Interaction
- Authors: Caris Moses, Michael Noseworthy, Leslie Pack Kaelbling, Tomás Lozano-Pérez, and Nicholas Roy
- Abstract summary: Humans are able to build on prior experience quickly and efficiently.
Adults also exhibit this behavior when entering new spaces such as kitchens.
We develop a method, Contextual Prior Prediction, which provides a means of transferring knowledge between interactions in similar domains through vision.
- Score: 37.759459329701194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploration in novel settings can be challenging without prior experience in
similar domains. However, humans are able to build on prior experience quickly
and efficiently. Children exhibit this behavior when playing with toys. For
example, given a toy with a yellow and blue door, a child will explore with no
clear objective, but once they have discovered how to open the yellow door,
they will most likely be able to open the blue door much faster. Adults also
exhibit this behavior when entering new spaces such as kitchens. We develop a
method, Contextual Prior Prediction, which provides a means of transferring
knowledge between interactions in similar domains through vision. We develop
agents that exhibit exploratory behavior with increasing efficiency, by
learning visual features that are shared across environments, and how they
correlate to actions. Our problem is formulated as a Contextual Multi-Armed
Bandit where the contexts are images, and the robot has access to a
parameterized action space. Given a novel object, the objective is to maximize
reward with few interactions. One domain that strongly exhibits correlations
between visual features and motion is that of kinematically constrained
mechanisms. We
evaluate our method on simulated prismatic and revolute joints.
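As a rough illustration of the formulation above (a sketch, not the authors' implementation), the snippet below treats each discretized action parameter as an arm of a contextual bandit: a stand-in for the learned visual model supplies a prior reward estimate per arm from the image context, and a UCB rule spends the few allowed interactions on the visually most promising parameters. All names, the feature stub, and the discretization are hypothetical.

```python
# Minimal sketch (not the paper's implementation) of a contextual bandit with
# image contexts and a discretized, parameterized action space. A learned
# prior predictor would normally be a CNN trained on past environments; here
# it is a hand-crafted stub. All names are hypothetical.
import numpy as np


def predict_prior(image: np.ndarray, n_bins: int) -> np.ndarray:
    """Stand-in for a learned visual model: maps an image to a prior
    mean-reward estimate for each discretized action parameter."""
    # Hypothetical hand-crafted feature; a real system would use a CNN.
    brightness = image.mean()
    centers = np.linspace(0.0, 1.0, n_bins)
    return np.exp(-((centers - brightness) ** 2) / 0.1)


def ucb_episode(image, pull, n_bins=10, n_steps=20, c=1.0):
    """Run one interaction episode on a novel object.

    `pull(action)` executes the parameterized action and returns a reward.
    The visual prior initializes per-arm estimates (with a pseudo-count of
    one) so early pulls focus on visually promising action parameters."""
    means = predict_prior(image, n_bins)          # prior reward estimates
    counts = np.ones(n_bins)                      # pseudo-count from the prior
    actions = np.linspace(0.0, 1.0, n_bins)       # discretized action params
    total = 0.0
    for t in range(1, n_steps + 1):
        ucb = means + c * np.sqrt(np.log(t + 1) / counts)
        a = int(np.argmax(ucb))
        r = pull(actions[a])
        total += r
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]    # running-mean update
    return total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((32, 32))                   # stand-in for an RGB context
    best = 0.7                                     # hidden best action parameter
    reward = ucb_episode(image, lambda a: float(np.exp(-10 * (a - best) ** 2)))
    print(f"episode reward: {reward:.2f}")
```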
Related papers
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- Self-Supervised Learning of Action Affordances as Interaction Modes [25.16302650076381]
In this work, we tackle unsupervised learning of priors of useful interactions with articulated objects.
We use no supervision or privileged information; we only assume access to the depth sensor in the simulator to learn the interaction modes.
We show that our model covers most of the human interaction modes, outperforms existing state-of-the-art methods for affordance learning, and can generalize to objects never seen during training.
arXiv Detail & Related papers (2023-05-27T19:58:11Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception [31.009703947432026]
This work explores building articulation models of indoor scenes through a robot's purposeful interactions.
We introduce an interactive perception approach to this task.
We demonstrate the effectiveness of our approach in both simulation and real-world scenes.
arXiv Detail & Related papers (2023-02-02T18:22:00Z)
- Synthesizing Physical Character-Scene Interactions [64.26035523518846]
Realistic character animation requires synthesizing interactions between virtual characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters.
Our approach takes physics-based character motion generation a step closer to broad applicability.
arXiv Detail & Related papers (2023-02-02T05:21:32Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- Visual Perspective Taking for Opponent Behavior Modeling [22.69165968663182]
We propose an end-to-end long-term visual prediction framework for robots.
We demonstrate our approach in the context of visual hide-and-seek.
We suggest that visual behavior modeling and perspective taking skills will play a critical role in the ability of physical robots to fully integrate into real-world multi-agent activities.
arXiv Detail & Related papers (2021-05-11T16:02:32Z)
- Learning Affordance Landscapes for Interaction Exploration in 3D Environments [101.90004767771897]
Embodied agents must be able to master how their environment works.
We introduce a reinforcement learning approach for exploration for interaction.
We demonstrate our idea with AI2-iTHOR.
arXiv Detail & Related papers (2020-08-21T00:29:36Z)
- Learning About Objects by Learning to Interact with Them [29.51363040054068]
Humans often learn about their world with little to no external supervision.
We present a computational framework to discover objects and learn their physical properties.
Our agent, when placed within the near photo-realistic and physics-enabled AI2-THOR environment, interacts with its world and learns about objects.
arXiv Detail & Related papers (2020-06-16T16:47:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.