Learning Dynamics Models with Stable Invariant Sets
- URL: http://arxiv.org/abs/2006.08935v2
- Date: Thu, 29 Oct 2020 20:33:53 GMT
- Title: Learning Dynamics Models with Stable Invariant Sets
- Authors: Naoya Takeishi and Yoshinobu Kawahara
- Abstract summary: We propose a method to ensure that a dynamics model has a stable invariant set of general classes.
The projection can be computed easily, and at the same time the model's flexibility is maintained by using various invertible neural networks for the transformation.
- Score: 17.63040340961143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Invariance and stability are essential notions in the study of dynamical
systems, and thus it is of great interest to learn a dynamics model with a
stable invariant set. However, existing methods can only handle the stability
of an equilibrium. In this paper, we propose a method to ensure that a dynamics
model has a stable invariant set of general classes, such as limit cycles and
line attractors. We start with the approach of Manek and Kolter (2019), who
use a learnable Lyapunov function to make a model stable with respect to an
equilibrium. We generalize it to general sets by introducing a projection onto
them. To resolve the difficulty of analytically specifying the invariant set
that is to be made stable, we propose defining such a set as a primitive shape
(e.g., a sphere) in a latent space and learning the transformation between the
original and latent spaces. This enables us to compute the projection easily,
and at the same time we can maintain the model's flexibility by using various
invertible neural networks for the transformation. We present experimental
results that show the validity of the proposed method and its usefulness for
long-term prediction.
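To make the construction concrete, the following is a minimal PyTorch sketch of the key idea, not the authors' implementation: a toy invertible affine map `g` stands in for the learned invertible neural network, the unit sphere plays the role of the primitive set in latent space, and a Manek-and-Kolter-style correction forces a Lyapunov-like distance to the set to decrease along trajectories.

```python
# Minimal sketch (not the authors' code) of stabilizing dynamics around a
# latent-space invariant set. Assumptions: the primitive set is the unit
# sphere, and `g` is a toy invertible affine map standing in for an
# invertible neural network (e.g., a normalizing flow).
import torch

torch.manual_seed(0)
d = 2
A = torch.eye(d) + 0.1 * torch.randn(d, d)  # toy invertible linear part
b = torch.randn(d)

def g(x):                  # x -> z; placeholder for an invertible NN
    return x @ A.T + b

def project_to_sphere(z):  # projection onto the primitive set (unit sphere)
    return z / z.norm(dim=-1, keepdim=True).clamp_min(1e-8)

def V(x):                  # Lyapunov-like squared distance to the set
    z = g(x)
    return ((z - project_to_sphere(z)) ** 2).sum(dim=-1)

f_hat = torch.nn.Sequential(  # unconstrained ("nominal") dynamics model
    torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, d))

def f_stable(x, alpha=1.0):
    # Manek-and-Kolter-style correction: modify f_hat so that
    # dV/dt <= -alpha * V along trajectories, making the set attractive.
    x = x.requires_grad_(True)
    v = V(x)
    (grad_v,) = torch.autograd.grad(v.sum(), x, create_graph=True)
    fx = f_hat(x)
    violation = torch.relu((grad_v * fx).sum(-1) + alpha * v)
    denom = (grad_v ** 2).sum(-1).clamp_min(1e-8)
    return fx - (violation / denom).unsqueeze(-1) * grad_v

# One Euler step of the stabilized dynamics from a random state:
x0 = torch.randn(1, d)
x1 = x0 + 0.01 * f_stable(x0)
```

Because `g` is invertible, the set made attractive in the original space is the preimage of the unit sphere under `g`, so a flexible transformation can shape it into, e.g., a limit cycle while the latent projection stays trivial.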
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [113.89327264634984]
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples.
Traditional methods widely adopt static adaptation relying on a fixed parameter space to learn from data that arrive sequentially.
We propose a dual selective SSM projector that dynamically adjusts the projection parameters based on the intermediate features for dynamic adaptation.
arXiv Detail & Related papers (2024-07-08T17:09:39Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
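As a toy illustration of the separation idea, here is a hedged NumPy sketch that greedily keeps only candidate inducing points at least `eps` apart; the paper's cover-tree construction is more involved, and this stand-in illustrates only the minimum-separation condition.

```python
# Hedged sketch: greedy selection of inducing points with a minimum
# pairwise separation `eps`. A simple stand-in for the paper's cover-tree
# method, not its actual algorithm.
import numpy as np

def select_inducing_points(X, eps):
    """Return a subset of rows of X whose pairwise distances are >= eps."""
    inducing = [X[0]]
    for x in X[1:]:
        if np.linalg.norm(np.asarray(inducing) - x, axis=1).min() >= eps:
            inducing.append(x)
    return np.asarray(inducing)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))           # e.g., 2-D geospatial inputs
Z = select_inducing_points(X, eps=0.5)  # well-separated inducing points
print(Z.shape)
```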
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Learning and Inference in Sparse Coding Models with Langevin Dynamics [3.0600309122672726]
We describe a system capable of inference and learning in a probabilistic latent variable model.
We demonstrate this idea for a sparse coding model by deriving a continuous-time equation for inferring its latent variables via Langevin dynamics.
We show that Langevin dynamics lead to an efficient procedure for sampling from the posterior distribution in the 'L0 sparse' regime, where latent variables are encouraged to be set to zero as opposed to having a small L1 norm.
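As a hedged sketch of the general recipe (not the paper's model), the following runs Langevin dynamics on the coefficients of a linear sparse coding model x ~ N(Da, sigma^2 I), following the gradient of the log posterior plus injected Gaussian noise; a smooth L1-like prior is used in place of the paper's L0-sparse construction, and the dictionary `D` and all sizes are illustrative.

```python
# Hedged sketch of Langevin inference in a linear sparse coding model.
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 32                      # data dim, number of dictionary atoms
D = rng.normal(size=(n, m)) / np.sqrt(n)  # fixed, illustrative dictionary
x = rng.normal(size=n)             # observation
a = np.zeros(m)                    # latent coefficients
sigma2, lam, eps = 0.1, 1.0, 1e-3  # noise var., sparsity weight, step size

for _ in range(2000):
    # Gradient of the log posterior: reconstruction term plus a smoothed
    # sparse prior (tanh approximates the sign function of an L1 prior).
    grad = D.T @ (x - D @ a) / sigma2 - lam * np.tanh(a / 0.05)
    a += eps * grad + np.sqrt(2 * eps) * rng.normal(size=m)

print(np.round(a, 2))  # an approximate sample from the posterior
```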
arXiv Detail & Related papers (2022-04-23T23:16:47Z)
- Stability Preserving Data-driven Models With Latent Dynamics [0.0]
We introduce a data-driven modeling approach for dynamics problems with latent variables.
We present a model framework where the stability of the coupled dynamics can be easily enforced.
arXiv Detail & Related papers (2022-04-20T00:41:10Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
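The sketch below shows a single learned message-passing step on a 1-D grid graph, in the spirit of such solvers but not the paper's architecture; the two small MLPs, the hidden width, and the residual update are illustrative choices.

```python
# Hedged sketch: one message-passing update on a 1-D grid of solution values.
import torch

n, h = 64, 32                       # number of grid nodes, hidden width
u = torch.randn(n, 1)               # current solution values at the nodes
edges = torch.tensor([[i, i + 1] for i in range(n - 1)])  # 1-D neighbours

msg_net = torch.nn.Sequential(      # message computed from an edge's endpoints
    torch.nn.Linear(2, h), torch.nn.ReLU(), torch.nn.Linear(h, h))
upd_net = torch.nn.Sequential(      # node update from aggregated messages
    torch.nn.Linear(h + 1, h), torch.nn.ReLU(), torch.nn.Linear(h, 1))

src, dst = edges[:, 0], edges[:, 1]
m_fwd = msg_net(torch.cat([u[src], u[dst]], dim=-1))  # messages src -> dst
m_bwd = msg_net(torch.cat([u[dst], u[src]], dim=-1))  # messages dst -> src
agg = torch.zeros(n, h)
agg.index_add_(0, dst, m_fwd)       # sum incoming messages at each node
agg.index_add_(0, src, m_bwd)
u_next = u + upd_net(torch.cat([agg, u], dim=-1))  # residual node update
```

With linear networks and sum aggregation, an update of this form can reproduce classical finite-difference stencils, which is the sense in which such solvers representationally contain classical methods.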
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Learning Stable Koopman Embeddings [9.239657838690228]
We present a new data-driven method for learning stable models of nonlinear systems.
We prove that every discrete-time nonlinear contracting model can be learnt in our framework.
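One common way to hard-constrain stability of a learned linear Koopman operator, sketched below, is to bound its spectral norm below one so the latent map is contracting; this is a standard trick and not necessarily the parameterization used in the paper.

```python
# Hedged sketch: a Schur-stable Koopman operator via spectral-norm rescaling.
import torch

k, rho = 8, 0.99                    # latent dimension, contraction rate
W = torch.randn(k, k, requires_grad=True)   # unconstrained parameters

def koopman_operator(W):
    # Rescale so the spectral norm is at most rho < 1, which guarantees
    # ||A z|| <= rho * ||z|| and hence a contracting latent map.
    s = torch.linalg.matrix_norm(W, ord=2)
    return W * (rho / torch.clamp(s, min=rho))

A = koopman_operator(W)
z = torch.randn(k)
with torch.no_grad():
    for _ in range(50):
        z = A @ z                   # latent rollout provably decays
print(z.norm())                     # close to zero
```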
arXiv Detail & Related papers (2021-10-13T05:44:13Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Almost Surely Stable Deep Dynamics [4.199844472131922]
We introduce a method for learning provably stable deep neural network based dynamic models from observed data.
Our method works by embedding a Lyapunov neural network into the dynamic model, thereby inherently satisfying the stability criterion.
arXiv Detail & Related papers (2021-03-26T20:37:08Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error in discretising the continuous dynamics.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
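The hypothesis is easy to reproduce on a toy Dirac-GAN-style bilinear game f(theta, phi) = theta * phi, whose simultaneous gradient flow is a pure rotation: explicit Euler integration (plain simultaneous gradient descent) spirals outward, while a classical fourth-order Runge-Kutta step stays stable. The sketch below illustrates the integration-error argument and is not the paper's experimental setup.

```python
# Hedged sketch: Euler vs. Runge-Kutta (RK4) on the bilinear game's ODE.
import numpy as np

def v(s):                       # gradient-flow vector field (a rotation)
    theta, phi = s
    return np.array([-phi, theta])

def euler_step(s, h):
    return s + h * v(s)

def rk4_step(s, h):
    k1 = v(s)
    k2 = v(s + 0.5 * h * k1)
    k3 = v(s + 0.5 * h * k2)
    k4 = v(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.1
s_euler = np.array([1.0, 1.0])
s_rk4 = np.array([1.0, 1.0])
for _ in range(1000):
    s_euler = euler_step(s_euler, h)
    s_rk4 = rk4_step(s_rk4, h)
print(np.linalg.norm(s_euler))  # grows steadily (training diverges)
print(np.linalg.norm(s_rk4))    # stays near its initial value
```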
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
- ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows [29.310742141970394]
We introduce ImitationFlow, a novel deep generative model that allows learning complex, globally stable nonlinear dynamics.
We show the effectiveness of our method with both standard datasets and a real robot experiment.
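As a hedged, deterministic simplification of the flow-based idea (the actual method learns a normalizing flow and models stochastic dynamics): impose simple stable dynamics z' = -z in latent space and pull them back through the Jacobian of an invertible map, here a toy elementwise map `g`.

```python
# Hedged sketch: globally stable dynamics pulled back through a toy
# invertible map. `g` stands in for a trained normalizing flow.
import torch

def g(x):
    # Elementwise and strictly increasing, hence invertible.
    return x + 0.3 * torch.tanh(x)

def f(x):
    # If z = g(x) and we impose z' = -z, then x' = J_g(x)^{-1} (-g(x)),
    # so every trajectory converges to the equilibrium g^{-1}(0).
    J = torch.autograd.functional.jacobian(g, x)
    return torch.linalg.solve(J, -g(x))

x = torch.randn(3)
for _ in range(200):
    x = x + 0.05 * f(x)           # Euler rollout of the stable dynamics
print(x.norm())                   # close to zero (= g^{-1}(0) here)
```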
arXiv Detail & Related papers (2020-10-25T14:49:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.