Reconstruction of neuromorphic dynamics from a single scalar time series using variational autoencoder and neural network map
- URL: http://arxiv.org/abs/2411.07055v1
- Date: Mon, 11 Nov 2024 15:15:55 GMT
- Title: Reconstruction of neuromorphic dynamics from a single scalar time series using variational autoencoder and neural network map
- Authors: Pavel V. Kuptsov, Nataliya V. Stankevich
- Abstract summary: A model of a physiological neuron based on the Hodgkin-Huxley formalism is considered.
Single time series of one of its variables is shown to be enough to train a neural network that can operate as a discrete time dynamical system.
- Abstract: This paper examines the reconstruction of a family of dynamical systems with neuromorphic behavior from a single scalar time series. A model of a physiological neuron based on the Hodgkin-Huxley formalism is considered. A single time series of one of its variables is shown to be enough to train a neural network that can operate as a discrete-time dynamical system with one control parameter. The neural network system is created in two steps. First, delay-coordinate embedding vectors are constructed from the original time series and their dimension is reduced by means of a variational autoencoder to obtain the recovered state-space vectors. It is shown that an appropriate reduced dimension can be determined by analyzing the autoencoder training process. Second, pairs of recovered state-space vectors at consecutive time steps, supplied with a constant value playing the role of a control parameter, are used to train another neural network to operate as a recurrent map. The regimes of the resulting neural network system observed as its control parameter is varied agree very well with those of the original system, even though they were not explicitly presented during training.
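The two-step pipeline from the abstract can be sketched in a few lines of numpy. This is a hedged illustration only: the toy signal, the embedding dimension and lag, and the parameter value are all made up, and a PCA projection stands in for the paper's variational autoencoder encoder.

```python
import numpy as np

def delay_embed(x, dim, lag=1):
    """Build delay-coordinate vectors [x(t), x(t+lag), ..., x(t+(dim-1)*lag)]."""
    n = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)

# Toy scalar time series standing in for one observed neuron variable.
t = np.linspace(0.0, 40.0, 2000)
series = np.sin(t) + 0.5 * np.sin(2.3 * t)

# Step one: delay-coordinate embedding, then dimension reduction.
E = delay_embed(series, dim=8, lag=3)

# Placeholder for the VAE encoder: a linear PCA projection to 3 dimensions
# (the paper trains a variational autoencoder and picks the reduced
# dimension by analyzing its training process).
E_centered = E - E.mean(axis=0)
_, _, Vt = np.linalg.svd(E_centered, full_matrices=False)
Z = E_centered @ Vt[:3].T  # recovered state-space vectors

# Step two: training pairs (z_t, p) -> z_{t+1} for the recurrent map,
# with a constant control-parameter value p appended to each input.
p = 0.7  # hypothetical parameter value
inputs = np.hstack([Z[:-1], np.full((len(Z) - 1, 1), p)])
targets = Z[1:]
print(inputs.shape, targets.shape)
```

A second network trained on `inputs` and `targets` would then be iterated as a discrete-time map, with `p` varied at inference time to explore the reconstructed family of regimes.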
Related papers
- Gradient-free training of recurrent neural networks [3.272216546040443]
We introduce a computational approach to construct all weights and biases of a recurrent neural network without using gradient-based methods.
The approach is based on a combination of random feature networks and Koopman operator theory for dynamical systems.
In computational experiments on time series, forecasting for chaotic dynamical systems, and control problems, we observe that the training time and forecasting accuracy of the recurrent neural networks we construct are improved.
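The gradient-free idea above can be illustrated with the random-feature half of the construction (the Koopman-operator component is omitted, and the task, sizes, and weights below are illustrative assumptions): hidden weights are sampled once and frozen, and only a linear readout is computed in closed form by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead prediction task on a smooth series.
x = np.sin(np.linspace(0.0, 30.0, 1500))
X = np.stack([x[1:-1], x[:-2]], axis=1)  # inputs: (x_t, x_{t-1})
y = x[2:]                                # target: x_{t+1}

# Random feature layer: weights are sampled, never trained.
H = 200
W = rng.normal(size=(2, H))
b = rng.normal(size=H)
Phi = np.tanh(X @ W + b)

# Gradient-free "training": the readout is a least-squares solve.
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
mse = np.mean((Phi @ beta - y) ** 2)
print(f"train MSE: {mse:.2e}")
```

The appeal is that the entire fit reduces to one linear solve, which is why such constructions can beat gradient descent on training time.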
arXiv Detail & Related papers (2024-10-30T21:24:34Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Discovering dynamical features of Hodgkin-Huxley-type model of physiological neuron using artificial neural network [0.0]
We consider Hodgkin-Huxley-type system with two fast and one slow variables.
For these two systems we create artificial neural networks that are able to reproduce their dynamics.
For the bistable model this means that the network, trained on only one branch of the solutions, recovers the other branch without seeing it during training.
arXiv Detail & Related papers (2022-03-26T19:04:19Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- On the reproducibility of fully convolutional neural networks for modeling time-space evolving physical systems [0.0]
A fully convolutional deep neural network is evaluated by training the same network several times under identical conditions.
Trainings performed with double floating-point precision provide slightly better estimations and a significant reduction of the variability of both the network parameters and its testing error range.
arXiv Detail & Related papers (2021-05-12T07:39:30Z)
- Model Order Reduction based on Runge-Kutta Neural Network [0.0]
In this work, we apply modifications to both steps and investigate their impact by testing on three simulation models.
For the model reconstruction step, two neural network architectures are compared: the Multilayer Perceptron (MLP) and the Runge-Kutta Neural Network (RKNN).
arXiv Detail & Related papers (2021-03-25T13:02:16Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
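A minimal sketch of such a cell, assuming the standard liquid-time-constant form dx/dt = -x/tau + f(x, u)(A - x) with a tanh gate f; the sizes, weights, and input signal here are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative LTC-style cell: each unit is a linear first-order system
# whose effective time constant is modulated by a nonlinear gate f.
n_units, n_in = 4, 2
Wx = rng.normal(scale=0.5, size=(n_units, n_units))
Wi = rng.normal(scale=0.5, size=(n_units, n_in))
tau, A, dt = 1.0, 1.0, 0.01

def step(x, u):
    f = np.tanh(Wx @ x + Wi @ u)      # nonlinear gate
    dx = -x / tau - f * x + f * A     # dx/dt = -x/tau + f*(A - x)
    return x + dt * dx                # explicit Euler step

x = np.zeros(n_units)
for k in range(500):
    u = np.array([np.sin(0.05 * k), 1.0])
    x = step(x, u)
print(x)
```

Because the gate multiplies the state, the decay rate itself becomes input-dependent, and the state stays bounded for bounded inputs.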
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Variational inference formulation for a model-free simulation of a dynamical system with unknown parameters by a recurrent neural network [8.616180927172548]
We propose a "model-free" simulation of a dynamical system with unknown parameters without prior knowledge.
The deep learning model aims to jointly learn the nonlinear time marching operator and the effects of the unknown parameters from a time series dataset.
It is found that the proposed deep learning model is capable of correctly identifying the dimensions of the random parameters and learning a representation of complex time series data.
arXiv Detail & Related papers (2020-03-02T20:57:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.