Model-Invariant State Abstractions for Model-Based Reinforcement
Learning
- URL: http://arxiv.org/abs/2102.09850v1
- Date: Fri, 19 Feb 2021 10:37:54 GMT
- Title: Model-Invariant State Abstractions for Model-Based Reinforcement
Learning
- Authors: Manan Tomar, Amy Zhang, Roberto Calandra, Matthew E. Taylor, Joelle
Pineau
- Abstract summary: We introduce a new type of state abstraction called model-invariance.
This allows for generalization to novel combinations of unseen values of state variables.
We prove that an optimal policy can be learned over this model-invariance state abstraction.
- Score: 54.616645151708994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accuracy and generalization of dynamics models are key to the success of
model-based reinforcement learning (MBRL). As the complexity of tasks
increases, learning dynamics models becomes increasingly sample inefficient for
MBRL methods. However, many tasks also exhibit sparsity in the dynamics, i.e.,
actions have only a local effect on the system dynamics. In this paper, we
exploit this property with a causal invariance perspective in the single-task
setting, introducing a new type of state abstraction called
\textit{model-invariance}. Unlike previous forms of state abstractions, a
model-invariance state abstraction leverages causal sparsity over state
variables. This allows for generalization to novel combinations of unseen
values of state variables, something that non-factored forms of state
abstractions cannot do. We prove that an optimal policy can be learned over
this model-invariance state abstraction. Next, we propose a practical method to
approximately learn a model-invariant representation for complex domains. We
validate our approach by showing improved modeling performance over standard
maximum likelihood approaches on challenging tasks, such as the MuJoCo-based
Humanoid. Furthermore, within the MBRL setting we show strong performance gains
w.r.t. sample efficiency across a host of other continuous control tasks.
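The abstract describes the approach only at a high level, so the following is a minimal, hypothetical sketch of the causal-sparsity idea that model-invariance builds on: a factored dynamics model in which each next-state variable is predicted only from an assumed set of parent variables and the action, so predictions cannot pick up dependencies on the rest of the state. The names `FactoredDynamics` and `parents`, the fixed parent sets, and the use of PyTorch are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FactoredDynamics(nn.Module):
    """Sketch: each next-state variable is predicted only from its parent variables."""

    def __init__(self, parents, action_dim, hidden=64):
        super().__init__()
        # parents[i] lists the indices of state variables assumed to affect s'[i]
        self.parents = parents
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(len(p) + action_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for p in parents
        )

    def forward(self, state, action):
        # Each head only sees its parent variables, so its prediction cannot
        # depend on the remaining (non-parent) state variables.
        preds = []
        for p, head in zip(self.parents, self.heads):
            preds.append(head(torch.cat([state[:, p], action], dim=-1)))
        return torch.cat(preds, dim=-1)

# Toy usage: 4 state variables, 2 action dimensions, hypothetical parent sets.
parents = [[0, 1], [1], [2, 3], [3]]
model = FactoredDynamics(parents, action_dim=2)
state, action, next_state = torch.randn(8, 4), torch.randn(8, 2), torch.randn(8, 4)
loss = ((model(state, action) - next_state) ** 2).mean()  # standard MSE fit
loss.backward()
```

Because each head ignores non-parent variables, its predictions carry over to novel combinations of those variables, which is the kind of generalization the abstract highlights; the paper's practical method learns an approximately model-invariant representation rather than assuming the parent sets are known.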
Related papers
- Building Minimal and Reusable Causal State Abstractions for
Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
arXiv Detail & Related papers (2024-01-23T05:43:15Z)
- ReCoRe: Regularized Contrastive Representation Learning of World Model [21.29132219042405]
We present a world model that learns invariant features using contrastive unsupervised learning and an intervention-invariant regularizer.
Our method outperforms current state-of-the-art model-based and model-free RL methods and significantly improves on out-of-distribution point navigation tasks evaluated on the iGibson benchmark.
arXiv Detail & Related papers (2023-12-14T15:53:07Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee for model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Causal Dynamics Learning for Task-Independent State Abstraction [61.707048209272884]
We introduce Causal Dynamics Learning for Task-Independent State Abstraction (CDL)
CDL learns a causal dynamics model, with theoretical guarantees, that removes unnecessary dependencies between state variables and the action.
A state abstraction can then be derived from the learned dynamics; a minimal sketch of this general idea appears after this list.
arXiv Detail & Related papers (2022-06-27T17:02:53Z)
- Learning Robust State Abstractions for Hidden-Parameter Block MDPs [55.31018404591743]
We leverage ideas of common structure from the HiP-MDP setting to enable robust state abstractions inspired by Block MDPs.
We derive instantiations of this new framework for both multi-task reinforcement learning (MTRL) and meta-reinforcement learning (Meta-RL) settings.
arXiv Detail & Related papers (2020-07-14T17:25:27Z)
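The CBM and CDL entries above both derive a state abstraction from learned causal structure over state variables, the action, and the reward. As referenced in the CDL summary, here is a minimal, hypothetical sketch of that general idea: keep only the variables that can influence the reward, directly or through the dynamics. The function name and the toy dependency graph are assumptions for illustration, not either paper's implementation.

```python
def relevant_variables(dyn_parents, reward_parents):
    """Backward closure over a (hypothetical) learned dependency graph:
    keep state variables the reward depends on, plus everything that can
    influence them through the dynamics."""
    keep = set(reward_parents)
    changed = True
    while changed:
        changed = False
        for child, parents in enumerate(dyn_parents):
            if child in keep:
                for p in parents:
                    if p not in keep:
                        keep.add(p)
                        changed = True
    return sorted(keep)

# Toy graph: s'[2] depends on s[0] and s[2]; the reward depends on s[2];
# s[1] and s[3] never influence the reward, so they are abstracted away.
dyn_parents = [[0], [1], [0, 2], [3]]
print(relevant_variables(dyn_parents, reward_parents=[2]))  # -> [0, 2]
```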
This list is automatically generated from the titles and abstracts of the papers on this site.