Learning Unstable Dynamics with One Minute of Data: A
Differentiation-based Gaussian Process Approach
- URL: http://arxiv.org/abs/2103.04548v1
- Date: Mon, 8 Mar 2021 05:08:47 GMT
- Title: Learning Unstable Dynamics with One Minute of Data: A
Differentiation-based Gaussian Process Approach
- Authors: Ivan D. Jimenez Rodriguez, Ugo Rosolia, Aaron D. Ames, Yisong Yue
- Abstract summary: We show how to exploit the differentiability of Gaussian processes to create a state-dependent linearized approximation of the true continuous dynamics.
We validate our approach by iteratively learning the system dynamics of an unstable system such as a 9-D segway.
- Score: 47.045588297201434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a straightforward and efficient way to estimate dynamics models
for unstable robotic systems. Specifically, we show how to exploit the
differentiability of Gaussian processes to create a state-dependent linearized
approximation of the true continuous dynamics. Our approach is compatible with
most Gaussian process approaches for system identification, and can learn an
accurate model using modest amounts of training data. We validate our approach
by iteratively learning the system dynamics of an unstable system such as a 9-D
segway (using only one minute of data) and we show that the resulting
controller is robust to unmodelled dynamics and disturbances, while
state-of-the-art control methods based on nominal models can fail under small
perturbations.
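The core mechanism is simple to sketch: the GP posterior mean is an analytic function of the state, so it can be differentiated in closed form to obtain a state-dependent Jacobian, i.e. a local linearization x_dot ≈ mu(x*) + A(x*)(x - x*) of the learned dynamics. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation; the RBF kernel, the toy two-dimensional data, and the single output dimension are assumptions made for brevity.

```python
# Minimal sketch (assumed setup, not the paper's code): fit a GP to dynamics
# data and differentiate its posterior mean to get a state-dependent
# linearization  x_dot ≈ mu(x*) + A(x*) (x - x*).
import numpy as np

def rbf(X1, X2, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def rbf_grad_x(x_star, X, lengthscale=0.5, variance=1.0):
    """Gradient of k(x*, x_i) with respect to x*, shape (N, D)."""
    diff = x_star[None, :] - X                       # (N, D)
    k = variance * np.exp(-0.5 * (diff**2).sum(-1) / lengthscale**2)
    return -(diff / lengthscale**2) * k[:, None]     # (N, D)

# Toy training data: states X (N x D) and observed derivatives Y (N,)
# for one output dimension of x_dot = f(x); stack outputs for the full f.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
Y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2               # stand-in "true" dynamics
noise = 1e-4

K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, Y)                        # GP weights K^{-1} y

def mean(x_star):
    """Posterior mean mu(x*) of the learned dynamics component."""
    return rbf(x_star[None, :], X)[0] @ alpha

def mean_jacobian_row(x_star):
    """d mu / d x*  --  one row of the state-dependent A(x*)."""
    return rbf_grad_x(x_star, X).T @ alpha           # (D,)

x0 = np.array([0.2, -0.3])
print("mu(x0)      =", mean(x0))
print("d mu/dx(x0) =", mean_jacobian_row(x0))
```

Stacking one such gradient row per output dimension of the dynamics yields the full state-dependent matrix A(x*), which a linearization-based controller can then use around the current operating point.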
Related papers
- Data-driven Effective Modeling of Multiscale Stochastic Dynamical Systems [4.357350642401934]
We present a numerical method for learning the dynamics of slow components of unknown multiscale dynamical systems.
By utilizing the observation data, our proposed method is capable of constructing a generative model that can accurately capture the effective dynamics of the slow variables in distribution.
arXiv Detail & Related papers (2024-08-27T07:03:51Z)
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing the processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Dynamic Bayesian Learning and Calibration of Spatiotemporal Mechanistic System [0.0]
We develop an approach for fully learning and calibrating mechanistic models from noisy observations.
We demonstrate this flexibility by solving problems arising in the analysis of nonlinear ordinary and partial differential equations.
arXiv Detail & Related papers (2022-08-12T23:17:46Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Using scientific machine learning for experimental bifurcation analysis of dynamic systems [2.204918347869259]
This study focuses on training universal differential equation (UDE) models for physical nonlinear dynamical systems with limit cycles.
We consider examples where the training data is generated by numerical simulations, and we also apply the proposed modelling concept to physical experiments.
We use both neural networks and Gaussian processes as universal approximators alongside the mechanistic models to give a critical assessment of the accuracy and robustness of the UDE modelling approach.
arXiv Detail & Related papers (2021-10-22T15:43:03Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
- ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows [29.310742141970394]
We introduce ImitationFlow, a novel deep generative model that allows learning complex, globally stable, nonlinear dynamics.
We show the effectiveness of our method with both standard datasets and a real robot experiment.
arXiv Detail & Related papers (2020-10-25T14:49:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.