Planning from Images with Deep Latent Gaussian Process Dynamics
- URL: http://arxiv.org/abs/2005.03770v1
- Date: Thu, 7 May 2020 21:29:45 GMT
- Authors: Nathanael Bosch, Jan Achterhold, Laura Leal-Taixé, Jörg Stückler
- Score: 2.924868086534434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Planning is a powerful approach to control problems with known environment
dynamics. In unknown environments the agent needs to learn a model of the
system dynamics to make planning applicable. This is particularly challenging
when the underlying states are only indirectly observable through images. We
propose to learn a deep latent Gaussian process dynamics (DLGPD) model that
learns low-dimensional system dynamics from environment interactions with
visual observations. The method infers latent state representations from
observations using neural networks and models the system dynamics in the
learned latent space with Gaussian processes. All parts of the model can be
trained jointly by optimizing a lower bound on the likelihood of transitions in
image space. We evaluate the proposed approach on the pendulum swing-up task
while using the learned dynamics model for planning in latent space in order to
solve the control problem. We also demonstrate that our method can quickly
adapt a trained agent to changes in the system dynamics from just a few
rollouts. We compare our approach to a state-of-the-art purely deep learning
based method and demonstrate the advantages of combining Gaussian processes
with deep learning for data efficiency and transfer learning.
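The pipeline the abstract describes — encode observations into a low-dimensional latent state, model transitions there with a Gaussian process, and plan in latent space — can be illustrated with a toy sketch. This is not the authors' implementation: the image encoder and the ELBO training are omitted, the GP is reduced to its posterior mean under an RBF kernel, the planner is simple random shooting, and the linear latent dynamics and all names are illustrative assumptions.

```python
# Toy sketch of latent-space planning with a GP dynamics model.
# All components (kernel, planner, dynamics) are illustrative stand-ins.
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    """Squared-exponential kernel between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

class GPDynamics:
    """GP posterior mean over latent transitions (z, a) -> z_next."""
    def __init__(self, Z, A, Z_next, noise=1e-3):
        self.X = np.concatenate([Z, A], axis=1)          # inputs: latent state + action
        K = rbf_kernel(self.X, self.X) + noise * np.eye(len(self.X))
        self.alpha = np.linalg.solve(K, Z_next)          # precomputed regression weights

    def predict(self, z, a):
        x = np.concatenate([z, a])[None, :]
        return (rbf_kernel(x, self.X) @ self.alpha)[0]   # posterior mean of z_next

def plan_random_shooting(model, z0, z_goal, horizon=5, n_samples=256, rng=None):
    """Return the first action of the sampled sequence whose rollout ends nearest the goal."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_cost, best_a0 = np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1, 1, size=(horizon, 1))
        z = z0
        for a in actions:                                # roll out the learned model
            z = model.predict(z, a)
        cost = np.linalg.norm(z - z_goal)
        if cost < best_cost:
            best_cost, best_a0 = cost, actions[0]
    return best_a0

# Toy training data: linear latent dynamics z' = 0.9 z + 0.5 a (stand-in for
# transitions inferred by an encoder from images).
rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(50, 1))
A = rng.uniform(-1, 1, size=(50, 1))
Z_next = 0.9 * Z + 0.5 * A
model = GPDynamics(Z, A, Z_next)
a0 = plan_random_shooting(model, z0=np.array([1.0]), z_goal=np.array([0.0]))
```

Because the GP stores the training transitions explicitly, refitting on a handful of new rollouts updates the model directly, which is the intuition behind the fast adaptation the abstract reports.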
Related papers
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- Learning visual-based deformable object rearrangement with local graph neural networks [4.333220038316982]
We propose a novel representation strategy that can efficiently model the deformable object states with a set of keypoints and their interactions.
We also propose a light local GNN learning to jointly model the deformable rearrangement dynamics and infer the optimal manipulation actions.
Our method reaches much higher success rates (96.3% on average) on a variety of deformable rearrangement tasks than state-of-the-art methods in simulation experiments.
arXiv Detail & Related papers (2023-10-16T11:42:54Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics [6.829711787905569]
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples, we demonstrate that our model can approximate the original system well.
arXiv Detail & Related papers (2022-06-07T02:25:38Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Dream to Explore: Adaptive Simulations for Autonomous Systems [3.0664963196464448]
We tackle the problem of learning to control dynamical systems by applying Bayesian nonparametric methods.
By employing Gaussian processes to discover latent world dynamics, we mitigate common data efficiency issues observed in reinforcement learning.
Our algorithm jointly learns a world model and policy by optimizing a variational lower bound of a log-likelihood.
arXiv Detail & Related papers (2021-10-27T04:27:28Z)
- LQResNet: A Deep Neural Network Architecture for Learning Dynamic Processes [9.36739413306697]
A data-driven approach, namely the operator inference framework, models a dynamic process.
We suggest combining the operator inference with certain deep neural network approaches to infer the unknown nonlinear dynamics of the system.
arXiv Detail & Related papers (2021-03-03T08:19:43Z)
- Sample-efficient reinforcement learning using deep Gaussian processes [18.044018772331636]
Reinforcement learning provides a framework for learning which actions to take to complete a task through trial and error.
In model-based reinforcement learning, efficiency is improved by learning to simulate the world dynamics.
We introduce deep Gaussian processes, in which the depth of the composition adds model complexity, while incorporating prior knowledge of the dynamics brings smoothness and structure.
arXiv Detail & Related papers (2020-11-02T13:37:57Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.