ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by
Normalizing Flows
- URL: http://arxiv.org/abs/2010.13129v1
- Date: Sun, 25 Oct 2020 14:49:46 GMT
- Title: ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by
Normalizing Flows
- Authors: Julen Urain, Michelle Ginesi, Davide Tateo, Jan Peters
- Abstract summary: We introduce ImitationFlow, a novel Deep generative model that allows learning complex globally stable, nonlinear dynamics.
We show the effectiveness of our method with both standard datasets and a real robot experiment.
- Score: 29.310742141970394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce ImitationFlow, a novel Deep generative model that allows
learning complex globally stable, stochastic, nonlinear dynamics. Our approach
extends the Normalizing Flows framework to learn stable Stochastic Differential
Equations. We prove the Lyapunov stability for a class of Stochastic
Differential Equations and we propose a learning algorithm to learn them from a
set of demonstrated trajectories. Our model extends the set of stable dynamical
systems that can be represented by state-of-the-art approaches, eliminates the
Gaussian assumption on the demonstrations, and outperforms the previous
algorithms in terms of representation accuracy. We show the effectiveness of
our method with both standard datasets and a real robot experiment.
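The core construction (an invertible map pushing a simple, globally stable base SDE forward into complex dynamics, with stability preserved under the diffeomorphism) can be sketched in a few lines. The coupling map phi and all constants below are illustrative stand-ins, not the paper's learned model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy invertible map phi (a 2-D coupling-layer shape; weights are illustrative, not learned).
def phi(y):
    # x1 = y1, x2 = y2 * exp(s(y1)) + t(y1) -- invertible given y1.
    s, t = 0.5 * np.tanh(y[0]), np.sin(y[0])
    return np.array([y[0], y[1] * np.exp(s) + t])

# Base dynamics: Ornstein-Uhlenbeck SDE dy = -y dt + sigma dW, globally stable at 0.
def simulate(y0, steps=2000, dt=0.01, sigma=0.05):
    y = np.array(y0, dtype=float)
    traj = []
    for _ in range(steps):
        y = y - y * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        traj.append(phi(y))  # observe the pushed-forward ("complex") dynamics
    return np.array(traj)

traj = simulate([3.0, -2.0])
# The pushed-forward trajectory inherits stability: it settles near phi(0).
print(np.linalg.norm(traj[-1] - phi(np.zeros(2))))
```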
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for (L2), (Linfty), and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
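A standard way to estimate drift and diffusion from sampled trajectories is regression on Euler-Maruyama increments (this is a generic sketch, not necessarily the cited library's estimator). A minimal 1-D example with a known Ornstein-Uhlenbeck ground truth:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 100_000
theta, sigma = 2.0, 0.3            # ground truth: dX = -theta*X dt + sigma dW

# Simulate a long 1-D trajectory with Euler-Maruyama.
x = np.empty(n)
x[0] = 1.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * noise[i]

dx = np.diff(x)
# Drift: least-squares fit of the increments against x (linear drift assumed for the toy).
theta_hat = -np.sum(x[:-1] * dx) / (np.sum(x[:-1] ** 2) * dt)
# Diffusion: quadratic variation of the residual increments.
resid = dx + theta_hat * x[:-1] * dt
sigma2_hat = np.mean(resid ** 2) / dt

print(theta_hat, np.sqrt(sigma2_hat))   # should land close to (2.0, 0.3)
```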
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - HyperSINDy: Deep Generative Modeling of Nonlinear Stochastic Governing
Equations [5.279268784803583]
We introduce HyperSINDy, a framework for modeling dynamics via a deep generative model of sparse governing equations from data.
Once trained, HyperSINDy generates dynamics via a differential equation whose coefficients are driven by white noise.
In experiments, HyperSINDy recovers ground truth governing equations, with the learned stochasticity scaling to match that of the data.
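HyperSINDy builds on the SINDy idea of sparse regression over a library of candidate terms. Below is a minimal deterministic SINDy-style sketch using sequential thresholded least squares on a toy 1-D system; HyperSINDy's generative, stochastic machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy 1-D system: x' = -2*x + 0.1*x**3 (deterministic, for illustration only).
x = rng.uniform(-2, 2, 500)
dx = -2 * x + 0.1 * x ** 3

# Candidate library of terms, then sequential thresholded least squares (STLSQ).
library = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])
names = ["1", "x", "x^2", "x^3"]
xi = np.linalg.lstsq(library, dx, rcond=None)[0]
for _ in range(10):                      # iterate: zero small coefficients, refit the rest
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big] = np.linalg.lstsq(library[:, big], dx, rcond=None)[0]

print({n: round(c, 3) for n, c in zip(names, xi) if c != 0.0})
```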
arXiv Detail & Related papers (2023-10-07T14:41:59Z) - Distributionally Robust Model-based Reinforcement Learning with Large
State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are the complex dynamical systems with large state spaces, the costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
arXiv Detail & Related papers (2023-09-05T13:42:11Z) - Learning minimal representations of stochastic processes with
variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing stochastic processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z) - A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z) - Reservoir Computing with Error Correction: Long-term Behaviors of
Stochastic Dynamical Systems [5.815325960286111]
We propose a data-driven framework combining Reservoir Computing and Normalizing Flow to study this issue.
We verify the effectiveness of the proposed framework in several experiments, including the Van der Pol oscillator, a simplified El Niño-Southern Oscillation model, and the Lorenz system.
arXiv Detail & Related papers (2023-05-01T05:50:17Z) - SDYN-GANs: Adversarial Learning Methods for Multistep Generative Models
for General Order Stochastic Dynamics [20.292913470013744]
We build on Generative Adversarial Networks (GANs) with generative model classes based on stable $m$-step numerical integrators.
We show how our approaches can be used for modeling physical systems to learn force-laws, damping coefficients, and noise-related parameters.
arXiv Detail & Related papers (2023-02-07T18:28:09Z) - Guaranteed Conservation of Momentum for Learning Particle-based Fluid
Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
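The mechanism can be sketched without the convolutional layers: if every pairwise interaction is applied antisymmetrically (f_ij = -f_ji), the net force is zero by construction, so total linear momentum is exactly conserved regardless of what the interaction computes. The interaction g below is a hypothetical stand-in for a learned kernel:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 8, 0.1
pos = rng.standard_normal((n, 2))
vel = rng.standard_normal((n, 2))

# Hypothetical "learned" pairwise interaction; g(-r) = -g(r) holds here, and the
# update below also enforces antisymmetry structurally via the +f / -f bookkeeping.
def g(r):                      # r: displacement vector between two particles
    return np.tanh(r) * np.exp(-np.dot(r, r))

p0 = vel.sum(axis=0)           # total momentum before the step
forces = np.zeros_like(pos)
for i in range(n):
    for j in range(i + 1, n):
        f = g(pos[i] - pos[j])
        forces[i] += f         # f_ij acting on particle i
        forces[j] -= f         # f_ji = -f_ij acting on particle j
vel += dt * forces
pos += dt * vel

print(np.abs(vel.sum(axis=0) - p0).max())   # ~0: momentum unchanged up to float error
```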
arXiv Detail & Related papers (2022-10-12T09:12:59Z) - Learning and Inference in Sparse Coding Models with Langevin Dynamics [3.0600309122672726]
We describe a system capable of inference and learning in a probabilistic latent variable model.
We demonstrate this idea for a sparse coding model by deriving a continuous-time equation for inferring its latent variables via Langevin dynamics.
We show that Langevin dynamics lead to an efficient procedure for sampling from the posterior distribution in the 'L0 sparse' regime, where latent variables are encouraged to be set to zero as opposed to having a small L1 norm.
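Posterior sampling via overdamped Langevin dynamics follows the generic update x <- x + eps * grad log p(x) + sqrt(2*eps) * noise. A minimal sketch on a standard Gaussian target (not the paper's sparse coding model or its L0 regime):

```python
import numpy as np

rng = np.random.default_rng(4)
# Target: standard 1-D Gaussian, so grad log p(x) = -x.
grad_log_p = lambda x: -x

eps, n_steps, burn = 0.01, 50_000, 5_000
x, samples = 3.0, []
for t in range(n_steps):
    # Discretized Langevin step: drift up the log-density plus injected noise.
    x = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * rng.standard_normal()
    if t >= burn:
        samples.append(x)
samples = np.array(samples)
print(samples.mean(), samples.var())   # close to (0, 1) for the Gaussian target
```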
arXiv Detail & Related papers (2022-04-23T23:16:47Z) - Learning Unstable Dynamics with One Minute of Data: A
Differentiation-based Gaussian Process Approach [47.045588297201434]
We show how to exploit the differentiability of Gaussian processes to create a state-dependent linearized approximation of the true continuous dynamics.
We validate our approach by iteratively learning the system dynamics of an unstable system such as a 9-D Segway.
arXiv Detail & Related papers (2021-03-08T05:08:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.