Goal-Aware Prediction: Learning to Model What Matters
- URL: http://arxiv.org/abs/2007.07170v2
- Date: Mon, 10 Aug 2020 23:15:15 GMT
- Title: Goal-Aware Prediction: Learning to Model What Matters
- Authors: Suraj Nair, Silvio Savarese, Chelsea Finn
- Abstract summary: One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task-relevant information, making the model aware of the current task and encouraging it to model only the relevant parts of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
- Score: 105.43098326577434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learned dynamics models combined with both planning and policy learning
algorithms have shown promise in enabling artificial agents to learn to perform
many diverse tasks with limited supervision. However, one of the fundamental
challenges in using a learned forward dynamics model is the mismatch between
the objective of the learned model (future state reconstruction), and that of
the downstream planner or policy (completing a specified task). This issue is
exacerbated in vision-based control tasks in diverse real-world environments,
where the complexity of the real world dwarfs model capacity. In this paper, we
propose to direct prediction towards task-relevant information, enabling the
model to be aware of the current task and encouraging it to only model relevant
quantities of the state space, resulting in a learning objective that more
closely matches the downstream task. Further, we do so in an entirely
self-supervised manner, without the need for a reward function or image labels.
We find that our method more effectively models the relevant parts of the scene
conditioned on the goal, and as a result outperforms standard task-agnostic
dynamics models and model-free reinforcement learning.
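The core objective admits a compact sketch. Below is a minimal, illustrative implementation (not the authors' code; the flat observation vectors, layer sizes, and exact residual formulation are assumptions): the dynamics model is conditioned on a goal and trained to reconstruct only the difference between the future observation and that goal, so regions already matching the goal cost nothing to predict. Goals can be relabeled from future frames of the same trajectory, keeping training self-supervised.

```python
# Minimal sketch (not the authors' code): a goal-conditioned forward model
# trained to reconstruct only the goal-relevant part of the future frame,
# approximated here by predicting the residual between the future
# observation and the goal. All sizes and modules are illustrative.
import torch
import torch.nn as nn

class GoalAwareModel(nn.Module):
    def __init__(self, obs_dim=64, act_dim=4, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim * 2, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 256),
                                      nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, obs_dim))

    def forward(self, obs, act, goal):
        # Conditioning on the goal lets the model ignore scene content
        # that is irrelevant to reaching it.
        z = self.encoder(torch.cat([obs, goal], dim=-1))
        z_next = self.dynamics(torch.cat([z, act], dim=-1))
        return self.decoder(z_next)

def gap_loss(model, obs, act, next_obs, goal):
    # Target is the residual between the future frame and the goal:
    # regions that already match the goal are zero and cost nothing, so
    # model capacity concentrates on task-relevant state. Goals are
    # relabeled future frames, so no reward function or labels are needed.
    pred = model(obs, act, goal)
    return ((pred - (next_obs - goal)) ** 2).mean()
```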
Related papers
- Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
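As a loose illustration of what a minimal, task-specific abstraction can look like (a hypothetical sketch; CBM itself infers causal structure rather than learning a simple mask), one can gate the dynamics model's inputs with a learnable sparse mask:

```python
# Hypothetical sketch of a minimal, task-specific abstraction: a learnable
# mask gates which state variables the dynamics model may depend on, with
# an L1-style penalty encouraging sparsity. CBM itself infers causal
# relationships; this only illustrates the masking intuition.
import torch
import torch.nn as nn

class MaskedDynamics(nn.Module):
    def __init__(self, state_dim=16, act_dim=4):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(state_dim))
        self.net = nn.Sequential(nn.Linear(state_dim + act_dim, 128),
                                 nn.ReLU(), nn.Linear(128, state_dim))

    def forward(self, state, action):
        mask = torch.sigmoid(self.mask_logits)  # soft variable selection
        return self.net(torch.cat([state * mask, action], dim=-1))

def masked_loss(model, s, a, s_next, sparsity=1e-3):
    pred_err = ((model(s, a) - s_next) ** 2).mean()
    return pred_err + sparsity * torch.sigmoid(model.mask_logits).sum()
```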
arXiv Detail & Related papers (2024-01-23T05:43:15Z)
- MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis [5.396167537615578]
A lack of quality data is a common issue for specific tasks in computational pathology.
We propose to exploit knowledge distillation, i.e., to use an existing model to train a new, target model.
We employ a student-teacher framework to learn a target model from a pre-trained teacher model without direct access to the source data.
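The basic ingredient here is standard knowledge distillation; a minimal sketch follows (MoMA adds momentum contrastive learning and multi-head attention on top, which this omits):

```python
# Generic knowledge-distillation loss, the basic ingredient of the
# student-teacher setup (MoMA additionally uses momentum contrast and
# multi-head attention, omitted here).
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soften both distributions with temperature T and match them via KL;
    # the T**2 factor keeps gradient magnitudes comparable across T.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```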
arXiv Detail & Related papers (2023-08-31T08:54:59Z)
- Self-Supervised Reinforcement Learning that Transfers using Random Features [41.00256493388967]
We propose a self-supervised reinforcement learning method that enables the transfer of behaviors across tasks with different rewards.
Our method is self-supervised in that it can be trained on offline datasets without reward labels, but can then be quickly deployed on new tasks.
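One way to realize such reward-free transfer, sketched under the assumption that per-feature value estimates were trained offline (names and shapes here are illustrative, not the paper's):

```python
# Illustrative sketch of reward-free transfer with random features. It
# assumes per-feature value estimates were already trained offline (one
# value head per random feature); names and shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(64, 8))   # fixed random projection of the state

def phi(states):
    # Random reward features: no reward labels are needed to define them.
    return np.tanh(states @ W_phi)

def fit_task_weights(states, rewards):
    # At deployment, regress the new task's reward onto the features ...
    w, *_ = np.linalg.lstsq(phi(states), rewards, rcond=None)
    return w

def q_for_task(q_features, w):
    # ... and take the same linear combination of per-feature values to
    # get an approximate Q-function for the new task without retraining.
    return q_features @ w          # q_features: (batch, 8) value estimates
```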
arXiv Detail & Related papers (2023-05-26T20:37:06Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to limited computational resources or efficiency considerations.
This poses a critical challenge for the real-world application of foundation models: the knowledge of the foundation model has to be transferred to the downstream task.
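A common reprogramming recipe, shown here as a hedged sketch (the paper's method may differ in detail), is to freeze the foundation model and train only a small input transformation plus an output mapping:

```python
# Sketch of one common reprogramming recipe (not necessarily this paper's):
# freeze the foundation model and train only a small input transformation
# plus an output mapping for the downstream task.
import torch
import torch.nn as nn

class Reprogram(nn.Module):
    def __init__(self, foundation, num_classes, img_shape=(3, 224, 224)):
        super().__init__()
        self.foundation = foundation.eval()
        for p in self.foundation.parameters():
            p.requires_grad = False                        # frozen backbone
        self.delta = nn.Parameter(torch.zeros(img_shape))  # learned input shift
        self.head = nn.Linear(1000, num_classes)  # assumes a 1000-way backbone

    def forward(self, x):
        # Only delta and head receive gradients; the backbone is untouched.
        return self.head(self.foundation(x + self.delta))
```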
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model considerably outperforms naive combinations of existing continual learning and visual RL algorithms on the DeepMind Control and Meta-World benchmarks with continual visual control tasks.
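The mixture idea can be sketched with a mixture-of-Gaussians dynamics head, where each component can specialize to one task's dynamics (an illustrative reduction; the paper's world model is more involved):

```python
# Illustrative mixture-of-Gaussians dynamics head: each component can
# specialize to one task's dynamics. The paper's world model is more
# involved; sizes and names here are assumptions.
import torch
import torch.nn as nn

class MixtureDynamics(nn.Module):
    def __init__(self, state_dim=32, act_dim=4, k=5):
        super().__init__()
        self.k, self.d = k, state_dim
        self.net = nn.Linear(state_dim + act_dim, k * (2 * state_dim + 1))

    def forward(self, s, a):
        out = self.net(torch.cat([s, a], dim=-1))
        logits, mu, log_std = out.split(
            [self.k, self.k * self.d, self.k * self.d], dim=-1)
        return (logits, mu.view(-1, self.k, self.d),
                log_std.view(-1, self.k, self.d))

def mixture_nll(logits, mu, log_std, s_next):
    # Negative log-likelihood of the next state under the mixture.
    comp = torch.distributions.Normal(mu, log_std.exp())
    log_p = comp.log_prob(s_next.unsqueeze(1)).sum(-1)      # (batch, k)
    mix = torch.log_softmax(logits, dim=-1)
    return -torch.logsumexp(mix + log_p, dim=-1).mean()
```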
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- Procedure Planning in Instructional Videos via Contextual Modeling and Model-based Policy Learning [114.1830997893756]
This work focuses on learning a model to plan goal-directed actions in real-life videos.
We propose novel algorithms to model human behaviors through Bayesian Inference and model-based Imitation Learning.
arXiv Detail & Related papers (2021-10-05T01:06:53Z)
- Model-Based Visual Planning with Self-Supervised Functional Distances [104.83979811803466]
We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
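A simple self-supervised recipe for such a distance, assuming offline trajectories (illustrative, not necessarily the paper's exact objective), is to regress the number of steps separating two frames of the same trajectory:

```python
# Illustrative recipe for a self-supervised functional distance: sample
# two frames k steps apart in an offline trajectory and regress k. The
# learned d(s, g) can then act as a cost for goal-reaching planners.
import random
import torch
import torch.nn as nn

dist = nn.Sequential(nn.Linear(2 * 64, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(dist.parameters(), lr=1e-3)

def train_step(trajectory):                # trajectory: (T, 64) tensor
    T = trajectory.shape[0]
    t = random.randrange(T - 1)
    k = random.randrange(1, T - t)         # temporal gap = distance label
    s, g = trajectory[t], trajectory[t + k]
    loss = (dist(torch.cat([s, g])).squeeze() - float(k)) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```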
arXiv Detail & Related papers (2020-12-30T23:59:09Z)
- Planning from Pixels using Inverse Dynamics Models [44.16528631970381]
We propose a novel way to learn latent world models by learning to predict sequences of future actions conditioned on task completion.
We evaluate our method on challenging visual goal completion tasks and show a substantial increase in performance compared to prior model-free approaches.
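A minimal multi-step inverse model along these lines (architecture and names are illustrative assumptions) predicts the action sequence connecting the current observation to a final one, which at test time is the goal:

```python
# Illustrative multi-step inverse model: predict the action sequence that
# carries the agent from the current observation to a final one, which at
# test time is the goal image. Architecture and names are assumptions.
import torch
import torch.nn as nn

class InverseModel(nn.Module):
    def __init__(self, obs_dim=64, act_dim=4, horizon=8):
        super().__init__()
        self.horizon, self.act_dim = horizon, act_dim
        self.net = nn.Sequential(nn.Linear(2 * obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, horizon * act_dim))

    def forward(self, obs, goal):
        out = self.net(torch.cat([obs, goal], dim=-1))
        return out.view(-1, self.horizon, self.act_dim)  # planned actions

def inverse_loss(model, obs, goal, actions):
    # Supervised on offline trajectories: the "goal" is a later frame and
    # the target is the action sequence actually taken between the two.
    return ((model(obs, goal) - actions) ** 2).mean()
```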
arXiv Detail & Related papers (2020-12-04T06:07:36Z)