Sample-efficient reinforcement learning using deep Gaussian processes
- URL: http://arxiv.org/abs/2011.01226v1
- Date: Mon, 2 Nov 2020 13:37:57 GMT
- Title: Sample-efficient reinforcement learning using deep Gaussian processes
- Authors: Charles Gadd, Markus Heinonen, Harri Lähdesmäki and Samuel Kaski
- Abstract summary: Reinforcement learning provides a framework for learning to control which actions to take towards completing a task through trial-and-error.
In model-based reinforcement learning, efficiency is improved by learning to simulate the world dynamics.
We introduce deep Gaussian processes, in which the depth of the composition adds model complexity while prior knowledge of the dynamics brings smoothness and structure.
- Score: 18.044018772331636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning provides a framework for learning, through
trial and error, which actions to take to complete a task. In many
applications, observing interactions is costly, necessitating sample-efficient
learning. In model-based reinforcement learning, efficiency is improved by
learning to simulate the world dynamics. The challenge is that model
inaccuracies rapidly accumulate over planned trajectories. We introduce deep
Gaussian processes, in which the depth of the composition adds model
complexity while incorporating prior knowledge of the dynamics brings
smoothness and structure. Our approach can sample a Bayesian posterior over
trajectories. We demonstrate substantially improved early sample efficiency
over competing methods across a number of continuous control tasks, including
the half-cheetah, whose contact dynamics have previously posed an
insurmountable problem for sample-efficient Gaussian-process-based models.
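
The core loop such a model enables can be sketched briefly. Below is a minimal illustration, not the authors' implementation: it assumes a hypothetical `model.predict(s, a)` that returns the mean and variance of the next-state change, as a (deep) GP posterior would, and approximates a posterior over trajectories by Monte Carlo rollouts.

```python
import numpy as np

def sample_trajectories(model, policy, s0, horizon=30, n_samples=50, rng=None):
    """Monte Carlo rollouts through a probabilistic dynamics model.

    `model.predict(s, a)` is assumed (hypothetical interface) to return
    (mean, var) of the next-state delta; each rollout draws one sample
    per step, so the set of rollouts approximates a posterior over
    trajectories rather than a single point prediction.
    """
    rng = rng or np.random.default_rng()
    trajectories = []
    for _ in range(n_samples):
        s = np.array(s0, dtype=float)
        traj = [s]
        for _ in range(horizon):
            a = policy(s)
            mean, var = model.predict(s, a)          # GP predictive moments
            s = s + rng.normal(mean, np.sqrt(var))   # sample next-state delta
            traj.append(s)
        trajectories.append(np.stack(traj))
    return np.stack(trajectories)  # (n_samples, horizon + 1, state_dim)
```

Expected return for policy improvement can then be estimated by averaging rewards over the sampled trajectories, which is why calibrated uncertainty matters: point predictions let model errors compound silently over long horizons.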
Related papers
- Efficient Weight-Space Laplace-Gaussian Filtering and Smoothing for Sequential Deep Learning [29.328769628694484]
Efficiently learning a sequence of related tasks, such as in continual learning, poses a significant challenge for neural nets.
We address this challenge with a grounded framework for sequentially learning related tasks based on Bayesian inference.
arXiv Detail & Related papers (2024-10-09T11:54:33Z)
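
As a rough illustration of the weight-space Laplace idea in the paper above, a generic sketch (not the paper's exact filtering/smoothing algorithm; the `loss_fn(model, data)` interface is hypothetical): after each task, approximate the weight posterior with a Gaussian and reuse it as the prior for the next task.

```python
import torch

def laplace_step(model, loss_fn, data, prior_mean, prior_prec, lr=1e-2, steps=1000):
    """One sequential-learning step with a diagonal Laplace approximation.

    Generic sketch: find the MAP under the Gaussian prior carried over
    from previous tasks, then approximate the new posterior precision
    with a diagonal curvature proxy (here: squared gradients at the MAP).
    """
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model, data)
        # Quadratic penalty = Gaussian prior from past tasks.
        for p, m, prec in zip(params, prior_mean, prior_prec):
            loss = loss + 0.5 * (prec * (p - m) ** 2).sum()
        loss.backward()
        opt.step()
    # Diagonal curvature estimate at the MAP -> new posterior precision.
    opt.zero_grad()
    loss_fn(model, data).backward()
    new_prec = [prec + p.grad.detach() ** 2 for p, prec in zip(params, prior_prec)]
    new_mean = [p.detach().clone() for p in params]
    return new_mean, new_prec
```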
- Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling [2.91204440475204]
Diffusion Probabilistic Models (DPMs) have emerged as a powerful class of deep generative models.
They rely on sequential denoising steps during sample generation.
We propose a novel method that integrates denoising phases directly into the model's architecture.
arXiv Detail & Related papers (2024-05-31T08:19:44Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging).
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
arXiv Detail & Related papers (2023-10-04T04:26:33Z)
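
The task-arithmetic operation underlying AdaMerging is simple to state in code. A minimal sketch of task-wise merging follows (a hypothetical helper, not AdaMerging's training loop, which learns the coefficients from unlabeled test data):

```python
import torch

def merge_task_wise(pretrained, finetuned_models, coeffs):
    """Task-wise merging via task arithmetic.

    Each task vector is (finetuned - pretrained) in weight space; the
    merged model is the pretrained weights plus a weighted sum of task
    vectors. AdaMerging's contribution is learning `coeffs` (per task,
    or per layer in the layer-wise variant) rather than hand-tuning.
    """
    base = pretrained.state_dict()
    merged = {k: v.clone() for k, v in base.items()}
    for lam, ft in zip(coeffs, finetuned_models):
        for k, v in ft.state_dict().items():
            if v.is_floating_point():              # skip integer buffers
                merged[k] += lam * (v - base[k])
    return merged  # load with model.load_state_dict(merged)
```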
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
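
A mixture world model of the kind described above can be sketched as a mixture-density head over next-state deltas; this is a generic sketch, and the paper's exact parameterization may differ:

```python
import torch
import torch.nn as nn

class MixtureDynamics(nn.Module):
    """Predicts a K-component Gaussian mixture over next-state deltas,
    so different components can capture different task dynamics."""

    def __init__(self, state_dim, action_dim, k=4, hidden=128):
        super().__init__()
        self.k, self.state_dim = k, state_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            # Per component: one mixture logit, one mean, one log-std.
            nn.Linear(hidden, k * (1 + 2 * state_dim)),
        )

    def forward(self, s, a):
        out = self.net(torch.cat([s, a], dim=-1))
        logits, rest = out[..., :self.k], out[..., self.k:]
        mu, log_std = rest.reshape(*s.shape[:-1], self.k, 2,
                                   self.state_dim).unbind(-2)
        comp = torch.distributions.Independent(
            torch.distributions.Normal(mu, log_std.exp()), 1)
        mix = torch.distributions.Categorical(logits=logits)
        return torch.distributions.MixtureSameFamily(mix, comp)
```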
- Hint assisted reinforcement learning: an application in radio astronomy [2.4366811507669115]
We propose to use hints generated by the environment as an aid to the reinforcement learning process, mitigating the complexity of model construction.
Results in several environments show increased sample efficiency from using hints compared to model-free methods.
arXiv Detail & Related papers (2023-01-10T12:24:13Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
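
At its simplest, the optimism principle the paper above operationalizes is acting greedily with an uncertainty bonus; a generic sketch (not the paper's kernel-embedding planner):

```python
import numpy as np

def ucb_action(q_means, q_stds, beta=1.0):
    """Optimism in the face of uncertainty: pick the action whose upper
    confidence bound (mean + beta * uncertainty) is largest. `q_means`
    and `q_stds` are per-action value estimates and uncertainties,
    e.g. derived from a latent variable model's posterior."""
    return int(np.argmax(np.asarray(q_means) + beta * np.asarray(q_stds)))
```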
- Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
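
The momentum guarantee above follows from antisymmetry: if the force particle j exerts on particle i is the exact negative of the force i exerts on j, pairwise forces cancel in the sum and total linear momentum cannot change. A minimal sketch, with antisymmetrized pairwise messages standing in for the paper's antisymmetrical continuous convolutions:

```python
import torch

def antisymmetric_forces(x, h):
    """Per-particle forces from an arbitrary pairwise network `h`.

    Antisymmetrizing guarantees F_ij = -F_ji, so forces cancel pairwise
    and the summed force (total momentum change) is exactly zero.
    x: (n, d) particle positions; h: maps (n, n, 2d) -> (n, n, d).
    """
    n, d = x.shape
    xi = x.unsqueeze(1).expand(n, n, d)
    xj = x.unsqueeze(0).expand(n, n, d)
    m_ij = h(torch.cat([xi, xj], dim=-1))        # arbitrary messages
    f_ij = 0.5 * (m_ij - m_ij.transpose(0, 1))   # antisymmetrize
    return f_ij.sum(dim=1)                       # (n, d) per-particle forces

# Sanity check: total force sums to zero regardless of what h computes.
h = torch.nn.Sequential(torch.nn.Linear(6, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 3))
x = torch.randn(8, 3)
assert torch.allclose(antisymmetric_forces(x, h).sum(0),
                      torch.zeros(3), atol=1e-5)
```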
- Sample Efficient Reinforcement Learning via Model-Ensemble Exploration and Exploitation [3.728946517493471]
MEEE is a model-ensemble method that consists of optimistic exploration and weighted exploitation.
Our approach outperforms other model-free and model-based state-of-the-art methods, especially in sample complexity.
arXiv Detail & Related papers (2021-07-05T07:18:20Z)
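
Ensemble-based optimism of the kind MEEE uses is often implemented by treating disagreement among dynamics models as an exploration bonus; a generic sketch (not MEEE's exact objective, and `m.predict(s, a)` is a hypothetical interface):

```python
import numpy as np

def exploration_bonus(models, s, a):
    """Disagreement across an ensemble of dynamics models, used as an
    optimistic exploration bonus: high variance across members flags
    state-action pairs the models are collectively uncertain about."""
    preds = np.stack([m.predict(s, a) for m in models])  # (ensemble, state_dim)
    return preds.std(axis=0).mean()                      # scalar bonus

# Action selection would then maximize r(s, a) + kappa * exploration_bonus(...).
```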
- Active Learning for Sequence Tagging with Deep Pre-trained Models and Bayesian Uncertainty Estimates [52.164757178369804]
Recent advances in transfer learning for natural language processing in conjunction with active learning open the possibility to significantly reduce the necessary annotation budget.
We conduct an empirical study of various Bayesian uncertainty estimation methods and Monte Carlo dropout options for deep pre-trained models in the active learning framework.
We also demonstrate that to acquire instances during active learning, a full-size Transformer can be substituted with a distilled version, which yields better computational performance.
arXiv Detail & Related papers (2021-01-20T13:59:25Z)
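
Monte Carlo dropout, one of the uncertainty estimators studied above, only requires keeping dropout active at inference time; a minimal sketch for a tagging model (assuming the model's only train-time stochasticity is dropout):

```python
import torch

@torch.no_grad()
def mc_dropout_probs(model, inputs, n_passes=20):
    """Predictive distribution via MC dropout: keep dropout stochastic
    at inference and average class probabilities over several forward
    passes. The spread across passes yields a per-token uncertainty
    score for choosing which instances to annotate next."""
    model.train()  # keeps dropout layers stochastic at inference
    probs = torch.stack([
        torch.softmax(model(inputs), dim=-1) for _ in range(n_passes)
    ])                                  # (n_passes, batch, seq, n_tags)
    mean = probs.mean(0)
    # Predictive entropy as a simple acquisition score.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    return mean, entropy
```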
- Planning from Images with Deep Latent Gaussian Process Dynamics [2.924868086534434]
Planning is a powerful approach to control problems with known environment dynamics.
In unknown environments the agent needs to learn a model of the system dynamics to make planning applicable.
We propose to learn a deep latent Gaussian process dynamics (DLGPD) model that learns low-dimensional system dynamics from environment interactions with visual observations.
arXiv Detail & Related papers (2020-05-07T21:29:45Z)
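
Planning with a learned latent dynamics model of this kind is commonly done with a sampling-based optimizer such as the cross-entropy method; a generic sketch (the `rollout_return` scorer is hypothetical, and this is not necessarily the DLGPD paper's exact planner):

```python
import numpy as np

def cem_plan(rollout_return, action_dim, horizon=12, pop=500, elites=50, iters=5):
    """Cross-entropy method over open-loop action sequences.

    `rollout_return(actions)` is assumed to score a (horizon, action_dim)
    action sequence by rolling it out through the learned latent dynamics
    model and summing predicted rewards."""
    rng = np.random.default_rng()
    mu = np.zeros((horizon, action_dim))
    sigma = np.ones((horizon, action_dim))
    for _ in range(iters):
        cand = rng.normal(mu, sigma, size=(pop, horizon, action_dim))
        scores = np.array([rollout_return(a) for a in cand])
        elite = cand[np.argsort(scores)[-elites:]]    # top-scoring sequences
        mu, sigma = elite.mean(0), elite.std(0) + 1e-6
    return mu[0]  # execute the first action, then replan (MPC style)
```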
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.