Learning unseen coexisting attractors
- URL: http://arxiv.org/abs/2207.14133v1
- Date: Thu, 28 Jul 2022 14:55:14 GMT
- Title: Learning unseen coexisting attractors
- Authors: Daniel J. Gauthier, Ingo Fischer, André Röhm
- Abstract summary: Reservoir computing is a machine learning approach that can generate a surrogate model of a dynamical system.
Here, we study a challenging problem of learning a dynamical system that has both disparate time scales and multiple co-existing dynamical states (attractors).
We show that the next-generation reservoir computing approach uses $\sim 1.7 \times$ less training data, requires a $10^3 \times$ shorter `warm up' time, and has an $\sim 100\times$ higher accuracy in predicting the co-existing attractor characteristics.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reservoir computing is a machine learning approach that can generate a
surrogate model of a dynamical system. It can learn the underlying dynamical
system using fewer trainable parameters and hence smaller training data sets
than competing approaches. Recently, a simpler formulation, known as
next-generation reservoir computing, removes many algorithm metaparameters and
identifies a well-performing traditional reservoir computer, thus simplifying
training even further. Here, we study a particularly challenging problem of
learning a dynamical system that has both disparate time scales and multiple
co-existing dynamical states (attractors). We compare the next-generation and
traditional reservoir computer using metrics quantifying the geometry of the
ground-truth and forecasted attractors. For the studied four-dimensional
system, the next-generation reservoir computing approach uses $\sim 1.7 \times$
less training data, requires $10^3 \times$ shorter `warm up' time, has fewer
metaparameters, and has an $\sim 100\times$ higher accuracy in predicting the
co-existing attractor characteristics in comparison to a traditional reservoir
computer. Furthermore, we demonstrate that it predicts the basin of attraction
with high accuracy. This work lends further support to the superior learning
ability of this new machine learning algorithm for dynamical systems.
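Since the abstract above contrasts the two approaches, a minimal sketch of the next-generation reservoir computing (NG-RC) forecasting loop may help fix ideas. It follows the general published NG-RC recipe (time-delayed states, polynomial features, a ridge-regressed readout predicting the next-step increment); the delay depth k, the quadratic feature set, and the ridge parameter below are illustrative assumptions, not the paper's exact settings.

```python
# Minimal NG-RC sketch in NumPy (illustrative, not the paper's exact setup).
import numpy as np

def ngrc_features(X, k=2):
    """Constant + k time-delayed state copies + their unique quadratic products."""
    T, d = X.shape
    lin = np.hstack([X[i:T - k + 1 + i] for i in range(k)])    # (T-k+1, k*d)
    iu = np.triu_indices(k * d)
    quad = np.einsum('ti,tj->tij', lin, lin)[:, iu[0], iu[1]]  # quadratic monomials
    return np.hstack([np.ones((lin.shape[0], 1)), lin, quad])

def fit_readout(X, k=2, ridge=1e-6):
    """Ridge-regress W so that W @ features(t) ~ x(t+1) - x(t)."""
    Phi = ngrc_features(X[:-1], k)
    Y = X[k:] - X[k - 1:-1]                                    # next-step increments
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y).T

def forecast(W, warmup, steps, k=2):
    """Autonomous forecast; note the warm-up window is only k states long."""
    traj = list(warmup[-k:])
    for _ in range(steps):
        phi = ngrc_features(np.array(traj[-k:]), k)[0]
        traj.append(traj[-1] + W @ phi)
    return np.array(traj[k:])
```

The very short warm-up window (k states, versus the long transient a traditional reservoir needs to synchronize) is the source of the $10^3 \times$ `warm up' advantage quoted above.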
Related papers
- Controlling dynamical systems to complex target states using machine learning: next-generation vs. classical reservoir computing
Controlling nonlinear dynamical systems using machine learning makes it possible to drive systems not only into simple behavior such as periodicity but also into more complex, arbitrary dynamics.
We first show that classical reservoir computing excels at this task.
We then compare these results, obtained with different amounts of training data, to an alternative setup in which next-generation reservoir computing is used instead.
It turns out that while delivering comparable performance for typical amounts of training data, next-generation RC significantly outperforms the classical approach in situations where only very limited data is available.
arXiv Detail & Related papers (2023-07-14T07:05:17Z)
- Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems
We give a new approach to learning mixtures of linear dynamical systems based on tensor decompositions.
Our algorithm succeeds without strong separation conditions on the components, and can be used to compete with the Bayes optimal clustering of the trajectories.
arXiv Detail & Related papers (2023-07-13T03:00:01Z)
- Hindsight States: Blending Sim and Real Task Elements for Efficient Reinforcement Learning
In robotics, one approach to generating training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z)
- Continual Learning of Dynamical Systems with Competitive Federated Reservoir Computing
Continual learning aims to rapidly adapt to abrupt system changes without forgetting previous dynamical regimes.
This work proposes an approach to continual learning based on reservoir computing.
We show that this multi-head reservoir minimizes interference and forgetting on several dynamical systems.
arXiv Detail & Related papers (2022-06-27T14:35:50Z)
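For the continual-learning entry above, the following is a minimal sketch, assuming one shared recurrent reservoir with a separate ridge-regressed readout head per dynamical regime and a simple lowest-error rule for choosing a head; the competitive and federated mechanics of the paper are not reproduced, and all sizes are illustrative.

```python
# Multi-head reservoir sketch: shared reservoir, one linear readout per regime.
import numpy as np

rng = np.random.default_rng(0)
N, d = 300, 3                                    # reservoir size, input dimension
W_in = rng.uniform(-0.5, 0.5, (N, d))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9

def run_reservoir(U):
    """Drive the shared reservoir with an input series U of shape (T, d)."""
    r, states = np.zeros(N), []
    for u in U:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

def train_head(U, ridge=1e-6):
    """Fit one ridge-regression readout head on data from a single regime."""
    R = run_reservoir(U[:-1])
    return np.linalg.solve(R.T @ R + ridge * np.eye(N), R.T @ U[1:]).T

def pick_head(heads, U_probe):
    """Select the head with the lowest one-step error on a short probe window."""
    R = run_reservoir(U_probe[:-1])
    return int(np.argmin([np.mean((R @ h.T - U_probe[1:])**2) for h in heads]))

# heads = [train_head(U) for U in regime_series]   # one head per seen regime
```

Because each regime only ever touches its own head, training on a new regime cannot overwrite an old one, which is one simple way to obtain the low interference and forgetting reported above.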
- Learning Spatiotemporal Chaos Using Next-Generation Reservoir Computing
We show that an ML architecture combined with a next-generation reservoir computer displays state-of-the-art performance with a training time $10^3$-$10^4$ times faster.
We also take advantage of the translational symmetry of the model to further reduce the computational cost and training data, each by a factor of $\sim 10$.
arXiv Detail & Related papers (2022-03-24T18:42:12Z)
- Powerpropagation: A sparsity inducing weight reparameterisation
We introduce Powerpropagation, a new weight-parameterisation for neural networks that leads to inherently sparse models.
Models trained in this manner exhibit similar performance, but their weight distribution has markedly higher density at zero, allowing more parameters to be pruned safely.
Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark.
arXiv Detail & Related papers (2021-10-01T10:03:57Z)
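For the Powerpropagation entry above, here is a minimal sketch of the reparameterisation in PyTorch, assuming the sign-preserving power form $w = v\,|v|^{\alpha-1}$; the layer and the choice $\alpha = 2$ are illustrative. Since the gradient of $w$ with respect to $v$ scales with $|v|^{\alpha-1}$, large weights learn faster while small ones stagnate near zero, producing the zero-heavy weight distribution that makes pruning safe.

```python
# Powerpropagation-style layer sketch (illustrative, not the authors' code).
import torch
import torch.nn as nn

class PowerpropLinear(nn.Module):
    def __init__(self, n_in, n_out, alpha=2.0):
        super().__init__()
        self.alpha = alpha
        self.v = nn.Parameter(torch.empty(n_out, n_in))
        nn.init.kaiming_uniform_(self.v)
        self.bias = nn.Parameter(torch.zeros(n_out))

    def effective_weight(self):
        # Sign-preserving power reparameterisation: w = v * |v|**(alpha - 1).
        return self.v * self.v.abs().pow(self.alpha - 1.0)

    def forward(self, x):
        return nn.functional.linear(x, self.effective_weight(), self.bias)

# After training, magnitude pruning acts on the *effective* weights:
# mask = layer.effective_weight().abs() > threshold
```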
- Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- Data-Efficient Learning for Complex and Real-Time Physical Problem Solving using Augmented Simulation
We present a task for navigating a marble to the center of a circular maze.
We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system.
arXiv Detail & Related papers (2020-11-14T02:03:08Z)
- Fast Modeling and Understanding Fluid Dynamics Systems with Encoder-Decoder Networks
We show that an accurate deep-learning-based proxy model can be taught efficiently by a finite-volume-based simulator.
Compared to traditional simulation, the proposed deep learning approach enables much faster forward computation.
We quantify the sensitivity of the deep learning model to key physical parameters and hence demonstrate that inversion problems can be solved with great acceleration.
arXiv Detail & Related papers (2020-06-09T17:14:08Z)
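For the encoder-decoder entry above, here is a minimal sketch of a convolutional proxy model, assuming grid-shaped input and output fields and supervised (input, output) pairs generated by the finite-volume simulator; the architecture, channel counts, and 64x64 grid are illustrative assumptions, not the paper's network.

```python
# Encoder-decoder proxy sketch: learn simulator input field -> output field.
import torch
import torch.nn as nn

proxy = nn.Sequential(                                    # 1-channel 64x64 I/O
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 32 -> 64
)

def train_step(opt, x, y):
    """One supervised step on a simulator pair (x: input field, y: solution)."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(proxy(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)
# for x, y in simulator_pairs:     # pairs produced by the finite-volume solver
#     train_step(opt, x, y)
```

Once trained, a single forward pass replaces a full solver run, which is where the much faster forward computation comes from.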
- Combining Machine Learning with Knowledge-Based Modeling for Scalable Forecasting and Subgrid-Scale Closure of Large, Complex, Spatiotemporal Systems
We attempt to utilize machine learning as the essential tool for integrating past temporal data into predictions.
We propose combining two approaches: (i) a parallel machine learning prediction scheme; and (ii) a hybrid technique yielding a composite prediction system composed of a knowledge-based component and a machine-learning-based component.
We demonstrate that not only can this method combining (i) and (ii) be scaled to give excellent performance for very large systems, but also that the length of time series data needed to train our multiple, parallel machine learning components is dramatically less than that necessary without parallelization.
arXiv Detail & Related papers (2020-02-10T23:21:50Z)
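For the parallel-plus-hybrid entry above, here is a minimal sketch of the parallelization idea alone, assuming a 1-D spatiotemporal field cut into overlapping patches that are each forecast by a small local model and then stitched back together; the patch and overlap sizes and the `local_models` list are hypothetical, and the knowledge-based hybrid component is omitted.

```python
# Parallel local-model forecasting sketch for a 1-D field (illustrative).
import numpy as np

def split(field, size=16, overlap=4):
    """Cut a field of length n*size into patches padded with neighbor points."""
    n = len(field) // size
    return [field[max(0, i*size - overlap):(i + 1)*size + overlap]
            for i in range(n)]

def stitch(patches, size=16, overlap=4):
    """Keep each patch's interior `size` points and concatenate them."""
    return np.concatenate(
        [p[(overlap if i > 0 else 0):][:size] for i, p in enumerate(patches)])

# Each local model forecasts its own padded patch one step ahead; stitching
# then discards the overlap halo:
# next_field = stitch([m(p) for m, p in zip(local_models, split(field))])
```

Since every local model sees only a short spatial window, each one needs far less training data than a single global model would, which matches the parallelization benefit described above.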