Continual Learning of Dynamical Systems with Competitive Federated
Reservoir Computing
- URL: http://arxiv.org/abs/2206.13336v1
- Date: Mon, 27 Jun 2022 14:35:50 GMT
- Title: Continual Learning of Dynamical Systems with Competitive Federated
Reservoir Computing
- Authors: Leonard Bereska and Efstratios Gavves
- Abstract summary: Continual learning aims to rapidly adapt to abrupt system changes without forgetting previous dynamical regimes.
This work proposes an approach to continual learning based on reservoir computing.
We show that this multi-head reservoir minimizes interference and catastrophic forgetting on several dynamical systems.
- Score: 29.98127520773633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning recently proved efficient in learning differential equations
and dynamical systems from data. However, the data is commonly assumed to
originate from a single never-changing system. In contrast, when modeling
real-world dynamical processes, the data distribution often shifts due to
changes in the underlying system dynamics. Continual learning of these
processes aims to rapidly adapt to abrupt system changes without forgetting
previous dynamical regimes. This work proposes an approach to continual
learning based on reservoir computing, a state-of-the-art method for training
recurrent neural networks on complex spatiotemporal dynamical systems.
Reservoir computing fixes the recurrent network weights - hence these cannot be
forgotten - and only updates linear projection heads to the output. We propose
to train multiple competitive prediction heads concurrently. Inspired by
neuroscience's predictive coding, only the most predictive heads activate,
laterally inhibiting and thus protecting the inactive heads from forgetting
induced by interfering parameter updates. We show that this multi-head
reservoir minimizes interference and catastrophic forgetting on several
dynamical systems, including the Van der Pol oscillator, the chaotic Lorenz
attractor, and the high-dimensional Lorenz-96 weather model. Our results
suggest that reservoir computing is a promising candidate framework for the
continual learning of dynamical systems. We provide our code for data
generation, method, and comparisons at
https://github.com/leonardbereska/multiheadreservoir.
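The mechanism in the abstract is compact enough to sketch. Below is a minimal, hypothetical Python illustration of the idea: a fixed random reservoir, several linear readout heads fitted by ridge regression, and a winner-take-all rule so that only the most predictive head is updated. All names, hyperparameters, and the leaky echo-state update are assumptions for illustration, not the authors' method; their actual implementation is the linked repository.

```python
# Hedged sketch of a competitive multi-head reservoir, NOT the authors'
# reference code. Assumes a leaky echo-state network with K linear
# readout heads; all hyperparameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N, D, K = 300, 3, 4               # reservoir size, system dim, number of heads
leak, rho, sigma = 0.3, 0.9, 0.5  # leak rate, spectral radius, input scale

# Fixed random reservoir: these weights are never updated,
# so nothing stored in them can be forgotten.
W = rng.normal(size=(N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius
W_in = rng.uniform(-sigma, sigma, size=(N, D))
heads = [np.zeros((D, N)) for _ in range(K)]     # competing linear readouts

def run_reservoir(inputs):
    """Drive the fixed reservoir and collect its states, shape (T, N)."""
    x, states = np.zeros(N), []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

def train_on_segment(inputs, targets, beta=1e-6):
    """Competitive update: only the most predictive head learns."""
    X = run_reservoir(inputs)
    errors = [np.mean((X @ h.T - targets) ** 2) for h in heads]
    k = int(np.argmin(errors))   # winner takes the update; the losing
    # heads are laterally inhibited (left untouched), protecting them
    # from interference when the regime changes.
    G = X.T @ X + beta * np.eye(N)               # ridge-regression readout
    heads[k] = np.linalg.solve(G, X.T @ targets).T
    return k
```

Calling train_on_segment on consecutive windows of a trajectory then lets each abrupt regime switch recruit whichever head currently predicts best, while the remaining heads keep the readouts they learned on earlier regimes.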
Related papers
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - Divide And Conquer: Learning Chaotic Dynamical Systems With Multistep Penalty Neural Ordinary Differential Equations [0.0]
Multistep Penalty NODE is applied to chaotic systems such as the Kuramoto-Sivashinsky equation, the two-dimensional Kolmogorov flow, and ERA5 reanalysis data for the atmosphere.
It is observed that the Multistep Penalty NODE provides viable performance for such chaotic systems at significantly lower computational cost.
arXiv Detail & Related papers (2024-06-30T02:50:28Z) - Controlling dynamical systems to complex target states using machine
learning: next-generation vs. classical reservoir computing [68.8204255655161]
Controlling nonlinear dynamical systems using machine learning makes it possible to drive systems not only into simple behavior such as periodicity, but also into more complex, arbitrary dynamics.
We show first that classical reservoir computing excels at this task.
In a next step, we compare those results based on different amounts of training data to an alternative setup, where next-generation reservoir computing is used instead.
It turns out that while the two approaches deliver comparable performance for usual amounts of training data, next-generation RC significantly outperforms classical reservoir computing when only very limited data is available.
arXiv Detail & Related papers (2023-07-14T07:05:17Z) - Brain-Inspired Spiking Neural Network for Online Unsupervised Time
Series Prediction [13.521272923545409]
We present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN).
CLURSNN makes online predictions by reconstructing the underlying dynamical system using Random Delay Embedding.
We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system.
arXiv Detail & Related papers (2023-04-10T16:18:37Z) - Learning unseen coexisting attractors [0.0]
Reservoir computing is a machine learning approach that can generate a surrogate model of a dynamical system.
Here, we study a challenging problem of learning a dynamical system that has both disparate time scales and multiple co-existing dynamical states (attractors).
We show that the next-generation reservoir computing approach uses $\sim 1.7\times$ less training data, requires a $10^3\times$ shorter 'warm up' time, and has $\sim 100\times$ higher accuracy in predicting the co-existing attractor characteristics.
arXiv Detail & Related papers (2022-07-28T14:55:14Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Decomposed Linear Dynamical Systems (dLDS) for learning the latent
components of neural dynamics [6.829711787905569]
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples we demonstrate that our model can well approximate the original system.
arXiv Detail & Related papers (2022-06-07T02:25:38Z) - Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns the basis functions for such a representation using supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z) - Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations (see the sketch after this entry).
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
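To make the "networks of linear first-order dynamical systems" phrasing above concrete, here is a hedged reconstruction of the liquid time-constant form, with hidden state $x$, input $I$, time constant $\tau$, bias vector $A$, and a learned gating nonlinearity $f$; the exact parameterization should be checked against the paper itself.

```latex
\frac{dx(t)}{dt}
  = -\left[\frac{1}{\tau} + f\big(x(t), I(t), t, \theta\big)\right] x(t)
    + f\big(x(t), I(t), t, \theta\big)\, A
```

Because $f$ appears both in the effective decay rate and in a bounded drive toward $A$, the state stays bounded for bounded inputs, which is the stability property the summary refers to.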