Trans-Bifurcation Prediction of Dynamics in terms of Extreme Learning Machines with Control Inputs
- URL: http://arxiv.org/abs/2410.13289v1
- Date: Thu, 17 Oct 2024 07:34:23 GMT
- Title: Trans-Bifurcation Prediction of Dynamics in terms of Extreme Learning Machines with Control Inputs
- Authors: Satoru Tadokoro, Akihiro Yamaguchi, Takao Namiki, Ichiro Tsuda
- Abstract summary: We show that the entire structure of the bifurcations of a target one-parameter family of dynamical systems can be nearly reproduced by training on transient dynamics using only a few parameter values.
We propose a mechanism to explain this remarkable learning ability and discuss the relationship between the present results and similar results obtained by Kim et al.
- Score: 0.49998148477760973
- Abstract: By extending the extreme learning machine with additional control inputs, we achieved an almost complete reproduction of the bifurcation structures of dynamical systems. The learning ability of the proposed neural network system is striking in that the entire bifurcation structure of a target one-parameter family of dynamical systems can be nearly reproduced by training on transient dynamics at only a few parameter values. Moreover, we propose a mechanism to explain this remarkable learning ability and discuss the relationship between the present results and similar results obtained by Kim et al.
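To make the idea in the abstract concrete, the sketch below is a minimal extreme learning machine whose input is augmented with a control (bifurcation) parameter. It is trained on short transients of the logistic map at a few parameter values and then iterated in closed loop while the control input is swept. The choice of the logistic map as the target family, the network size, and all other names and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

# Minimal sketch of an extreme learning machine with a control input.
# All names and hyperparameters are illustrative assumptions.

rng = np.random.default_rng(0)
N_HIDDEN = 200

# Fixed random input weights: one column for the state x, one for the control r.
W_in = rng.uniform(-1.0, 1.0, size=(N_HIDDEN, 2))
b = rng.uniform(-1.0, 1.0, size=N_HIDDEN)

def hidden(x, r):
    """Random-feature layer applied to (state, control) pairs."""
    z = np.column_stack([x, r])
    return np.tanh(z @ W_in.T + b)

def logistic(x, r):
    """Assumed target one-parameter family: x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1.0 - x)

# --- Training: short transients at only a few control-parameter values ---
train_r = [3.1, 3.5, 3.9]                  # a few parameter values, as in the abstract
X, R, Y = [], [], []
for r in train_r:
    x = 0.3
    for _ in range(60):                    # transient dynamics; no need to reach the attractor
        y = logistic(x, r)
        X.append(x); R.append(r); Y.append(y)
        x = y
X, R, Y = map(np.asarray, (X, R, Y))

# Only the linear readout is trained (ridge regression) -- the defining ELM property.
H = hidden(X, R)
W_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(N_HIDDEN), H.T @ Y)

# --- Prediction: sweep the control input over the whole parameter range ---
bifurcation = []                           # (r, x) points approximating the bifurcation diagram
for r in np.linspace(2.8, 4.0, 400):
    x = 0.3
    for n in range(500):
        x = (hidden(np.array([x]), np.array([r])) @ W_out).item()  # closed-loop iteration
        if n >= 400:                       # keep the tail as an estimate of the attractor
            bifurcation.append((r, x))
# Scatter-plotting `bifurcation` against r should roughly trace the logistic-map
# diagram, including parameter values never seen during training.
```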
Related papers
- Meta-Dynamical State Space Models for Integrative Neural Data Analysis [8.625491800829224]
Learning shared structure across environments facilitates rapid learning and adaptive behavior in neural systems.
There has been limited work exploiting the shared structure in neural activity during similar tasks for learning latent dynamics from neural recordings.
We propose a novel approach for meta-learning this solution space from task-related neural activity of trained animals.
arXiv Detail & Related papers (2024-10-07T19:35:49Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- A Waddington landscape for prototype learning in generalized Hopfield networks [0.0]
We study the learning dynamics of Generalized Hopfield networks.
We observe a strong resemblance to the canalized, or low-dimensional, dynamics of cells as they differentiate.
arXiv Detail & Related papers (2023-12-04T21:28:14Z)
- Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features [15.29093374895364]
We identify and characterize the mechanism through which deep fully connected neural networks learn gradient features.
Our ansatz sheds light on various deep learning phenomena including emergence of spurious features and simplicity biases.
To demonstrate the effectiveness of this feature learning mechanism, we use it to enable feature learning in classical, non-feature learning models.
arXiv Detail & Related papers (2022-12-28T15:50:58Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics [6.829711787905569]
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples, we demonstrate that our model can approximate the original system well.
arXiv Detail & Related papers (2022-06-07T02:25:38Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Deep learning of contagion dynamics on complex networks [0.0]
We propose a complementary approach based on deep learning to build effective models of contagion dynamics on networks.
By allowing simulations on arbitrary network structures, our approach makes it possible to explore the properties of the learned dynamics beyond the training data.
Our results demonstrate how deep learning offers a new and complementary perspective to build effective models of contagion dynamics on networks.
arXiv Detail & Related papers (2020-06-09T17:18:34Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)