Latent Variable Representation for Reinforcement Learning
- URL: http://arxiv.org/abs/2212.08765v1
- Date: Sat, 17 Dec 2022 00:26:31 GMT
- Title: Latent Variable Representation for Reinforcement Learning
- Authors: Tongzheng Ren, Chenjun Xiao, Tianjun Zhang, Na Li, Zhaoran Wang, Sujay
Sanghavi, Dale Schuurmans, Bo Dai
- Abstract summary: It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
- Score: 131.03944557979725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep latent variable models have achieved significant empirical successes in
model-based reinforcement learning (RL) due to their expressiveness in modeling
complex transition dynamics. On the other hand, it remains unclear
theoretically and empirically how latent variable models may facilitate
learning, planning, and exploration to improve the sample efficiency of RL. In
this paper, we provide a representation view of latent variable models for
state-action value functions, which allows both a tractable variational
learning algorithm and an effective implementation of the optimism/pessimism
principle in the face of uncertainty for exploration. In particular, we propose a
computationally efficient planning algorithm with UCB exploration by
incorporating kernel embeddings of latent variable models. Theoretically, we
establish the sample complexity of the proposed approach in the online and
offline settings. Empirically, we demonstrate superior performance over current
state-of-the-art algorithms across various benchmarks.
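As a rough illustration of how a learned embedding can drive UCB exploration, the sketch below computes an elliptical confidence bonus from a generic state-action feature map. This is a minimal sketch under stated assumptions: the class name, the bonus form, and the `embed` callable are illustrative placeholders, not the paper's exact algorithm or its kernel embedding of latent variable models.

```python
# Minimal sketch: elliptical UCB bonus from a learned state-action embedding.
# The embedding phi(s, a) is assumed to come from some latent variable model;
# here it is just a user-supplied vector. Not the paper's exact algorithm.
import numpy as np

class EllipticalUCB:
    def __init__(self, feature_dim, reg=1.0, beta=1.0):
        self.beta = beta                        # confidence-width multiplier
        self.cov = reg * np.eye(feature_dim)    # regularized feature covariance

    def update(self, phi_sa):
        # Accumulate the outer product of an observed feature vector.
        self.cov += np.outer(phi_sa, phi_sa)

    def bonus(self, phi_sa):
        # Optimistic bonus: beta * sqrt(phi^T Cov^{-1} phi).
        return self.beta * np.sqrt(phi_sa @ np.linalg.solve(self.cov, phi_sa))

# Usage (hypothetical): plan with the optimistic value
# q_hat(s, a) + ucb.bonus(embed(s, a)), where embed is the learned feature map.
```

The bonus shrinks along directions of feature space that have been visited often, which is the standard way optimism in the face of uncertainty is realized with linear or kernelized value representations.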
Related papers
- Diffusion Spectral Representation for Reinforcement Learning [17.701625371409644]
We propose to leverage the flexibility of diffusion models for reinforcement learning from a representation learning perspective.
By exploiting the connection between diffusion models and energy-based models, we develop Diffusion Spectral Representation (Diff-SR).
We show how Diff-SR facilitates efficient policy optimization and practical algorithms while explicitly bypassing the difficulty and inference cost of sampling from the diffusion model.
arXiv Detail & Related papers (2024-06-23T14:24:14Z)
- Model-based Reinforcement Learning for Parameterized Action Spaces [11.94388805327713]
We propose a novel model-based reinforcement learning algorithm for parameterized action MDPs (PAMDPs).
The agent learns a parameterized-action-conditioned dynamics model and plans with a modified Model Predictive Path Integral (MPPI) control; a generic MPPI sketch follows this entry.
Our empirical results on several standard benchmarks show that our algorithm achieves superior sample efficiency and performance compared to state-of-the-art PAMDP methods.
arXiv Detail & Related papers (2024-04-03T19:48:13Z)
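Since the entry above only names the planner, here is a minimal sketch of a generic Model Predictive Path Integral (MPPI) loop with a learned dynamics model. The `dynamics` and `reward` callables, the Gaussian noise, and the single continuous action vector are illustrative assumptions; the paper's modified MPPI for parameterized (discrete-plus-continuous) actions is not reproduced here.

```python
# Minimal sketch of a generic MPPI planning loop with a learned dynamics model.
import numpy as np

def mppi_plan(state, dynamics, reward, horizon=20, n_samples=256,
              action_dim=2, noise_std=0.3, temperature=1.0):
    mean = np.zeros((horizon, action_dim))            # nominal action sequence
    actions = mean[None] + noise_std * np.random.randn(n_samples, horizon, action_dim)
    returns = np.zeros(n_samples)
    for k in range(n_samples):                        # roll out each perturbed sequence
        s = state
        for t in range(horizon):
            returns[k] += reward(s, actions[k, t])
            s = dynamics(s, actions[k, t])            # learned model step
    weights = np.exp((returns - returns.max()) / temperature)
    weights /= weights.sum()
    plan = (weights[:, None, None] * actions).sum(axis=0)  # weighted average plan
    return plan[0]                                    # execute the first action
```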
- Distributionally Robust Model-based Reinforcement Learning with Large State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics; a minimal sketch of variance-based querying follows this entry.
arXiv Detail & Related papers (2023-09-05T13:42:11Z)
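As a rough illustration of the acquisition idea named above, the following sketch fits a Gaussian Process to observed transition data and picks the next state-action query where posterior variance is largest. The `GaussianProcessRegressor` defaults, the candidate set, and the variance-aggregation rule are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: GP dynamics model + maximum-variance query selection.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def max_variance_query(X_obs, y_obs, candidates):
    gp = GaussianProcessRegressor().fit(X_obs, y_obs)    # nominal dynamics model
    _, std = gp.predict(candidates, return_std=True)     # posterior uncertainty
    score = std if std.ndim == 1 else std.mean(axis=1)   # aggregate multi-output std
    return candidates[int(np.argmax(score))]             # most uncertain (s, a) query
```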
- Decision-Focused Model-based Reinforcement Learning for Reward Transfer [27.899494428456048]
We propose a novel robust decision-focused (RDF) algorithm that learns a transition model that achieves high returns while being robust to changes in the reward function.
We provide theoretical and empirical evidence, on a variety of simulators and real patient data, that RDF can learn simple yet effective models that can be used to plan personalized policies.
arXiv Detail & Related papers (2023-04-06T20:47:09Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
The derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Planning with Diffusion for Flexible Behavior Synthesis [125.24438991142573]
We consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem.
The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories.
arXiv Detail & Related papers (2022-05-20T07:02:03Z)
- Model-Invariant State Abstractions for Model-Based Reinforcement Learning [54.616645151708994]
We introduce a new type of state abstraction called model invariance.
This allows for generalization to novel combinations of unseen values of state variables.
We prove that an optimal policy can be learned over this model-invariance state abstraction.
arXiv Detail & Related papers (2021-02-19T10:37:54Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)