Contrastive Value Learning: Implicit Models for Simple Offline RL
- URL: http://arxiv.org/abs/2211.02100v1
- Date: Thu, 3 Nov 2022 19:10:05 GMT
- Title: Contrastive Value Learning: Implicit Models for Simple Offline RL
- Authors: Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, Jonathan Tompson
- Abstract summary: We propose Contrastive Value Learning (CVL), which learns an implicit, multi-step model of the environment dynamics.
CVL can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action.
Our experiments demonstrate that CVL outperforms prior offline RL methods on complex continuous control benchmarks.
- Score: 40.95632543012637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-based reinforcement learning (RL) methods are appealing in the offline
setting because they allow an agent to reason about the consequences of actions
without interacting with the environment. Prior methods learn a 1-step dynamics
model, which predicts the next state given the current state and action. These
models do not immediately tell the agent which actions to take, but must be
integrated into a larger RL framework. Can we model the environment dynamics in
a different way, such that the learned model does directly indicate the value
of each action? In this paper, we propose Contrastive Value Learning (CVL),
which learns an implicit, multi-step model of the environment dynamics. This
model can be learned without access to reward functions, but nonetheless can be
used to directly estimate the value of each action, without requiring any TD
learning. Because this model represents the multi-step transitions implicitly,
it avoids having to predict high-dimensional observations and thus scales to
high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior
offline RL methods on complex continuous control benchmarks.
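To make the abstract's idea concrete, below is a minimal sketch of one way an implicit, multi-step contrastive model of dynamics could be set up with an InfoNCE-style objective. All names (ContrastiveCritic, f_sa, g_sf), network sizes, and the in-batch negative sampling scheme are illustrative assumptions for this sketch, not the authors' implementation.
```python
# A minimal, illustrative sketch in the spirit of CVL as described in the abstract.
# Assumption: a (state, action) encoder and a future-state encoder are trained so that
# their similarity is high for future states actually reached in the offline data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveCritic(nn.Module):
    def __init__(self, state_dim, action_dim, embed_dim=64):
        super().__init__()
        # f_sa embeds a (state, action) pair; g_sf embeds a candidate future state.
        self.f_sa = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim))
        self.g_sf = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim))

    def logits(self, state, action, future_state):
        # Pairwise similarity between every (s, a) in the batch and every future state.
        z_sa = self.f_sa(torch.cat([state, action], dim=-1))  # [B, D]
        z_sf = self.g_sf(future_state)                        # [B, D]
        return z_sa @ z_sf.t()                                # [B, B]

def infonce_loss(critic, state, action, future_state):
    # Positive pairs lie on the diagonal: future_state[i] is drawn from later
    # timesteps of the trajectory containing (state[i], action[i]); off-diagonal
    # entries serve as negatives from other trajectories in the batch.
    logits = critic.logits(state, action, future_state)
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```
In this sketch, training only requires (state, action, future-state) triples sampled from the offline dataset, with no reward labels or TD targets, matching the abstract's claim that the model is learned without access to reward functions. How the learned similarity scores are then converted into per-action value estimates is specific to CVL and not reproduced here.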
Related papers
- AdaCred: Adaptive Causal Decision Transformers with Feature Crediting [11.54181863246064]
We introduce AdaCred, a novel approach that represents trajectories as causal graphs built from short-term action-reward-state sequences.
Our experiments demonstrate that AdaCred-based policies require shorter trajectory sequences and consistently outperform conventional methods in both offline reinforcement learning and imitation learning environments.
arXiv Detail & Related papers (2024-12-19T22:22:37Z)
- SOLD: Slot Object-Centric Latent Dynamics Models for Relational Manipulation Learning from Pixels [16.020835290802548]
Slot-Attention for Object-centric Latent Dynamics is a novel model-based reinforcement learning algorithm.
It learns object-centric dynamics models in an unsupervised manner from pixel inputs.
We demonstrate that the structured latent space not only improves model interpretability but also provides a valuable input space for behavior models to reason over.
arXiv Detail & Related papers (2024-10-11T14:03:31Z)
- The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning [31.8260779160424]
We investigate how popular algorithms perform as the learned dynamics model is improved.
We propose Reach-Aware Value Learning (RAVL), a simple and robust method that directly addresses the edge-of-reach problem.
arXiv Detail & Related papers (2024-02-19T20:38:00Z)
- Learning Environment Models with Continuous Stochastic Dynamics [0.0]
We aim to provide insight into the decisions faced by the agent by learning an automaton model of the environment's behavior under the agent's control.
In this work, we raise the capabilities of automata learning such that it is possible to learn models for environments that have complex and continuous dynamics.
We apply our automata learning framework to popular RL benchmark environments in OpenAI Gym, including LunarLander, CartPole, Mountain Car, and Acrobot.
arXiv Detail & Related papers (2023-06-29T12:47:28Z)
- Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z)
- Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective [142.36200080384145]
We propose a single objective which jointly optimizes a latent-space model and a policy to achieve high returns while remaining self-consistent.
We demonstrate that the resulting algorithm matches or improves the sample-efficiency of the best prior model-based and model-free RL methods.
arXiv Detail & Related papers (2022-09-18T03:51:58Z)
- Mismatched No More: Joint Model-Policy Optimization for Model-Based RL [172.37829823752364]
We propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return.
Our objective is a global lower bound on expected return, and this bound becomes tight under certain assumptions.
The resulting algorithm (MnM) is conceptually similar to a GAN.
arXiv Detail & Related papers (2021-10-06T13:43:27Z)
- Model-Invariant State Abstractions for Model-Based Reinforcement Learning [54.616645151708994]
We introduce a new type of state abstraction called model-invariance.
This allows for generalization to novel combinations of unseen values of state variables.
We prove that an optimal policy can be learned over this model-invariance state abstraction.
arXiv Detail & Related papers (2021-02-19T10:37:54Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)