Model-Based Visual Planning with Self-Supervised Functional Distances
- URL: http://arxiv.org/abs/2012.15373v1
- Date: Wed, 30 Dec 2020 23:59:09 GMT
- Title: Model-Based Visual Planning with Self-Supervised Functional Distances
- Authors: Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin
Eysenbach, Chelsea Finn, Sergey Levine
- Abstract summary: We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
- Score: 104.83979811803466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A generalist robot must be able to complete a variety of tasks in its
environment. One appealing way to specify each task is in terms of a goal
observation. However, learning goal-reaching policies with reinforcement
learning remains a challenging problem, particularly when hand-engineered
reward functions are not available. Learned dynamics models are a promising
approach for learning about the environment without rewards or task-directed
data, but planning to reach goals with such a model requires a notion of
functional similarity between observations and goal states. We present a
self-supervised method for model-based visual goal reaching, which uses both a
visual dynamics model and a dynamical distance function learned using
model-free reinforcement learning. Our approach learns entirely using offline,
unlabeled data, making it practical to scale to large and diverse datasets. In
our experiments, we find that our method can successfully learn models that
perform a variety of tasks at test-time, moving objects amid distractors with a
simulated robotic arm and even learning to open and close a drawer using a
real-world robot. In comparisons, we find that this approach substantially
outperforms both model-free and model-based prior methods. Videos and
visualizations are available here: http://sites.google.com/berkeley.edu/mbold.
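The planning loop the abstract describes — rolling candidate action sequences through a learned visual dynamics model and scoring them with a learned dynamical distance to the goal — can be sketched with a simple random-shooting planner. The `dynamics_model` and `distance_fn` below are toy numerical stand-ins for the paper's learned neural networks, and `plan_random_shooting` is a hypothetical name; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_model(state, action):
    # Stand-in for a learned visual dynamics model: simple additive dynamics.
    return state + action

def distance_fn(state, goal):
    # Stand-in for the learned dynamical distance: plain Euclidean distance.
    return float(np.linalg.norm(state - goal))

def plan_random_shooting(state, goal, horizon=5, n_samples=256):
    """Sample random action sequences, roll each through the model, and
    return the sequence whose predicted final state is closest to the goal."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, state.shape[0]))
    best_cost, best_seq = np.inf, None
    for seq in candidates:
        s = state
        for a in seq:
            s = dynamics_model(s, a)        # predict forward one step
        cost = distance_fn(s, goal)         # score the imagined rollout
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

state = np.zeros(2)
goal = np.array([1.0, 1.0])
actions, cost = plan_random_shooting(state, goal)
```

In practice the paper plans over image observations with a neural dynamics model and re-plans at every step (model-predictive control); the point of the sketch is only how a learned distance replaces a hand-engineered reward as the planning cost.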
Related papers
- SOLD: Reinforcement Learning with Slot Object-Centric Latent Dynamics [16.020835290802548]
Slot-Attention for Object-centric Latent Dynamics is a novel algorithm that learns object-centric dynamics models from pixel inputs.
We demonstrate that the structured latent space not only improves model interpretability but also provides a valuable input space for behavior models to reason over.
Our results show that SOLD outperforms DreamerV3, a state-of-the-art model-based RL algorithm, across a range of benchmark robotic environments.
arXiv Detail & Related papers (2024-10-11T14:03:31Z)
- Masked World Models for Visual Control [90.13638482124567]
We introduce a visual model-based RL framework that decouples visual representation learning and dynamics learning.
We demonstrate that our approach achieves state-of-the-art performance on a variety of visual robotic tasks.
arXiv Detail & Related papers (2022-06-28T18:42:27Z)
- Affordance Learning from Play for Sample-Efficient Policy Learning [30.701546777177555]
We use a self-supervised visual affordance model from human teleoperated play data to enable efficient policy learning and motion planning.
We combine model-based planning with model-free deep reinforcement learning to learn policies that favor the same object regions favored by people.
We find that our policies train 4x faster than the baselines and generalize better to novel objects because our visual affordance model can anticipate their affordance regions.
arXiv Detail & Related papers (2022-03-01T11:00:35Z)
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation [97.17517060585875]
We present a unified approach to visual navigation using a novel modular transfer learning model.
Our model can effectively leverage its experience from one source task and apply it to multiple target tasks.
Our approach learns faster, generalizes better, and outperforms state-of-the-art models by a significant margin.
arXiv Detail & Related papers (2022-02-05T00:07:21Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step toward dynamics modeling of hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
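The goal-aware idea above — penalizing the model mainly for prediction errors on task-relevant parts of the state — can be illustrated with a minimal sketch. The relevance mask (dimensions where the goal differs from the current state) and the function names below are illustrative assumptions for a toy vector state, not the paper's actual formulation.

```python
import numpy as np

def goal_relevance_mask(state, goal, threshold=0.1):
    # Hypothetical relevance weighting: treat dimensions where the goal
    # differs from the current state as task-relevant (weight 1), others as
    # irrelevant (weight 0).
    return (np.abs(goal - state) > threshold).astype(float)

def goal_aware_loss(predicted, target, state, goal):
    # Weight the squared prediction error per dimension by goal relevance,
    # so errors on task-irrelevant quantities do not penalize the model.
    w = goal_relevance_mask(state, goal)
    return float(np.sum(w * (predicted - target) ** 2))

state = np.array([0.0, 0.0, 0.0])
goal = np.array([1.0, 0.0, 0.0])       # only the first dimension matters
predicted = np.array([0.5, 9.0, 9.0])  # large errors on irrelevant dims
target = np.array([1.0, 0.0, 0.0])
loss = goal_aware_loss(predicted, target, state, goal)
```

Under this weighting, the large errors on the two goal-irrelevant dimensions contribute nothing to the loss; only the error on the first dimension is counted.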
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.