Reconstructing shared dynamics with a deep neural network
- URL: http://arxiv.org/abs/2105.02322v2
- Date: Fri, 14 Oct 2022 13:48:00 GMT
- Title: Reconstructing shared dynamics with a deep neural network
- Authors: Zsigmond Benkő, Zoltán Somogyvári
- Abstract summary: We present a method to identify hidden shared dynamics from time series by a two-module, feedforward neural network architecture.
The method has the potential to reveal hidden components of dynamical systems where experimental intervention is not possible.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Determining hidden shared patterns behind dynamic phenomena can be a
game-changer in multiple areas of research. Here we present the principles of, and
demonstrate, a method that identifies hidden shared dynamics from time series using
a two-module, feedforward neural network architecture: the Mapper-Coach network. We
reconstruct an unobserved, continuous latent input variable (the time series
generated by a chaotic logistic map) from the observed values of two simultaneously
forced chaotic logistic maps. The network was trained by error back-propagation to
predict one of the observed time series from its own past, conditioned on the other
observed time series. Once this prediction had been learned successfully, the
activity of the bottleneck neuron connecting the mapper and coach modules
correlated strongly with the latent shared input variable. The method has the
potential to reveal hidden components of dynamical systems where experimental
intervention is not possible.
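To make the setup concrete, here is a minimal sketch in PyTorch. The additive forcing scheme of the two logistic maps, the coupling strength, the delay-embedding length, and the layer sizes are all assumptions for illustration; the abstract does not fix them, so this is not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def logistic(u, r=4.0):
    """Chaotic logistic map (r = 4)."""
    return r * u * (1.0 - u)

# Hidden driver z and two simultaneously forced logistic maps x and y.
# The additive forcing with strength eps is an assumed coupling scheme.
T, eps = 5_000, 0.3
z, x, y = np.empty(T), np.empty(T), np.empty(T)
z[0], x[0], y[0] = rng.uniform(0.1, 0.9, size=3)
for t in range(T - 1):
    z[t + 1] = logistic(z[t])
    x[t + 1] = (1.0 - eps) * logistic(x[t]) + eps * z[t]
    y[t + 1] = (1.0 - eps) * logistic(y[t]) + eps * z[t]

k = 5  # delay-embedding length (assumed)

class MapperCoach(nn.Module):
    """Mapper squeezes the conditioning series through a one-unit bottleneck;
    the coach predicts y(t) from y's own past plus the bottleneck activity."""
    def __init__(self, k):
        super().__init__()
        self.mapper = nn.Sequential(nn.Linear(k, 32), nn.Tanh(), nn.Linear(32, 1))
        self.coach = nn.Sequential(nn.Linear(k + 1, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, x_past, y_past):
        b = self.mapper(x_past)  # bottleneck activation
        return self.coach(torch.cat([y_past, b], dim=-1)), b

# Delay-embedded training pairs: predict y[t] from y[t-k:t], conditioned on x[t-k:t].
X = torch.tensor(np.stack([x[t - k:t] for t in range(k, T)]), dtype=torch.float32)
Y = torch.tensor(np.stack([y[t - k:t] for t in range(k, T)]), dtype=torch.float32)
target = torch.tensor(y[k:], dtype=torch.float32).unsqueeze(-1)

model = MapperCoach(k)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2_000):  # plain error back-propagation on the prediction MSE
    pred, _ = model(X, Y)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()

# Read out the bottleneck and compare it with the hidden driver z.
with torch.no_grad():
    _, b = model(X, Y)
r = np.corrcoef(b.squeeze(-1).numpy(), z[k - 1:T - 1])[0, 1]
print(f"|corr(bottleneck, latent driver)| = {abs(r):.3f}")
```

If training succeeds, the printed correlation should approach 1, mirroring the paper's observation that the bottleneck activity tracks the hidden shared input.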
Related papers
- Joint trajectory and network inference via reference fitting
We propose an approach for leveraging both dynamical and perturbational single cell data to jointly learn cellular trajectories and power network inference.
Our approach is motivated by min-entropy estimation for dynamics and can infer directed and signed networks from time-stamped single cell snapshots.
arXiv Detail & Related papers (2024-09-10T21:49:57Z)
- How neural networks learn to classify chaotic time series
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Deep learning delay coordinate dynamics for chaotic attractors from partial observable data
We utilize deep artificial neural networks to learn discrete-time maps and continuous-time flows of the partial state.
We demonstrate the capacity of deep ANNs to predict chaotic behavior from a scalar observation of the Lorenz system on a manifold of dimension three.
arXiv Detail & Related papers (2022-11-20T19:25:02Z)
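To make the delay-coordinate idea in the entry above concrete, here is a minimal sketch (not the authors' code): a small MLP learns the discrete-time map on delay vectors built from the scalar x-coordinate of the Lorenz system. The embedding dimension, delay, integration step, and network width are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def lorenz_x(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with RK4 and return only the x-coordinate."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s, xs = np.array([1.0, 1.0, 1.0]), np.empty(n)
    for t in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        xs[t] = s[0]
    return xs

obs = lorenz_x(10_000)
m, tau = 3, 5  # embedding dimension and delay (assumed; three suffices here)
idx = np.arange((m - 1) * tau, len(obs) - 1)
V = np.array([[obs[i - j * tau] for j in range(m)] for i in idx], dtype=np.float32)
nxt = obs[idx + 1].astype(np.float32)[:, None]

# A small MLP serves as the learned discrete-time map on delay coordinates.
net = nn.Sequential(nn.Linear(m, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
V_t, nxt_t = torch.from_numpy(V), torch.from_numpy(nxt)
for _ in range(2_000):
    loss = nn.functional.mse_loss(net(V_t), nxt_t)
    opt.zero_grad(); loss.backward(); opt.step()
print("one-step prediction MSE:", loss.item())
```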
- Learning the Evolutionary and Multi-scale Graph Structure for Multivariate Time Series Forecasting
We show how to model the evolutionary and multi-scale interactions of time series.
In particular, we first provide a hierarchical graph structure coupled with dilated convolution to capture scale-specific correlations.
A unified neural network integrates the components above to produce the final prediction.
arXiv Detail & Related papers (2022-06-28T08:11:12Z)
- Combining machine learning and data assimilation to forecast dynamical systems from noisy partial observations
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations.
We show that the combination of random feature maps and data assimilation, called RAFDA, outperforms standard random feature maps for which the dynamics is learned using batch data.
arXiv Detail & Related papers (2021-08-08T03:38:36Z)
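As background for the RAFDA entry above, the sketch below shows the batch-trained random feature map surrogate that serves as the baseline: a fixed random feature layer with a linear readout fitted by ridge regression. The ensemble Kalman filter stage that RAFDA adds is omitted here, and the feature width, weight ranges, and regularization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random feature map surrogate for a propagator: u_{n+1} ~ W @ tanh(Win @ u_n + b).
# Win and b are drawn once and kept fixed; only the linear readout W is learned,
# here by ridge regression on batch data (RAFDA instead estimates W sequentially
# with an ensemble Kalman filter, which is beyond this sketch).
def fit_random_feature_propagator(U, D=300, lam=1e-4):
    d = U.shape[0]
    Win = rng.uniform(-1.0, 1.0, size=(D, d))
    b = rng.uniform(-1.0, 1.0, size=(D, 1))
    Phi = np.tanh(Win @ U[:, :-1] + b)  # features of u_0 .. u_{N-1}
    Y = U[:, 1:]                        # targets  u_1 .. u_N
    W = Y @ Phi.T @ np.linalg.inv(Phi @ Phi.T + lam * np.eye(D))
    return lambda u: W @ np.tanh(Win @ u + b)

# Toy "training trajectory": a scalar chaotic map standing in for real data.
u = np.empty((1, 5_000)); u[0, 0] = 0.2
for n in range(u.shape[1] - 1):
    u[0, n + 1] = 4.0 * u[0, n] * (1.0 - u[0, n])
step = fit_random_feature_propagator(u)
print("one-step error:", abs(step(u[:, -2:-1]) - u[:, -1:]).item())
```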
- A Predictive Coding Account for Chaotic Itinerancy
We show how a recurrent neural network implementing predictive coding can generate neural trajectories similar to chaotic itinerancy in the presence of input noise.
We propose two scenarios that generate random, past-independent attractor-switching trajectories using our model.
arXiv Detail & Related papers (2021-06-16T16:48:14Z)
- Continuous-in-Depth Neural Networks
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
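The correspondence behind ContinuousNet can be shown in a few lines: a residual block computing x + h*f(x) is the explicit-Euler discretization of dx/dt = f(x), so the same vector field evaluated with more, smaller steps should approach the same flow map. The sketch below uses a fixed random f rather than trained weights; it illustrates the idea, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# A residual block computes x + h * f(x): the explicit-Euler discretization of
# dx/dt = f(x). Reading layer index as integration time, the same vector field
# evaluated with more, smaller steps should converge to the same flow map.
# f below is a fixed random nonlinearity, not a trained network.
W1 = rng.normal(scale=0.5, size=(16, 16))
W2 = rng.normal(scale=0.5, size=(16, 16))

def f(x):
    return W2 @ np.tanh(W1 @ x)

def euler_flow(x, depth, T=1.0):
    h = T / depth  # the step size shrinks as the network gets deeper
    for _ in range(depth):
        x = x + h * f(x)
    return x

x0 = rng.normal(size=16)
shallow, deep = euler_flow(x0, 8), euler_flow(x0, 1024)
print("||flow(8 blocks) - flow(1024 blocks)|| =", np.linalg.norm(shallow - deep))
```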
- On a Bernoulli Autoregression Framework for Link Discovery and Prediction
We present a dynamic prediction framework for binary sequences that is based on a Bernoulli generalization of the auto-regressive process.
We propose a novel problem that exploits additional information via a much larger sequence of auxiliary networks.
In contrast to existing work, our gradient-based estimation approach is highly efficient and can scale to networks with millions of nodes.
arXiv Detail & Related papers (2020-07-23T05:58:22Z)
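A minimal sketch of the Bernoulli autoregression idea from the entry above: the probability of the next binary event is a logistic function of the last p outcomes, and the lag weights are recovered by gradient ascent on the Bernoulli log-likelihood. The order p, the logistic link, and the plain-gradient fit are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Bernoulli generalization of an AR(p) process for a binary sequence:
#   P(y_t = 1 | past) = sigmoid(c + sum_k a_k * y_{t-k}).
p, T = 3, 5_000
a_true, c_true = np.array([1.5, -0.8, 0.4]), -0.2
y = np.zeros(T, dtype=int)
for t in range(p, T):
    lags = y[t - p:t][::-1]  # y[t-1], y[t-2], ..., y[t-p]
    y[t] = rng.random() < sigmoid(c_true + a_true @ lags)

# Gradient ascent on the Bernoulli log-likelihood (logistic regression on lags).
H = np.array([y[t - p:t][::-1] for t in range(p, T)], dtype=float)
tgt = y[p:].astype(float)
a, c = np.zeros(p), 0.0
for _ in range(2_000):
    r = tgt - sigmoid(c + H @ a)  # residuals are the log-likelihood gradient
    a += 0.1 * (H.T @ r) / len(tgt)
    c += 0.1 * r.mean()
print("estimated lag weights:", np.round(a, 2), "true:", a_true)
```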
- Predicting Temporal Sets with Deep Neural Networks
We propose an integrated solution based on deep neural networks for temporal sets prediction.
A unique perspective is to learn element relationships by constructing a set-level co-occurrence graph.
We design an attention-based module to adaptively learn the temporal dependency of elements and sets.
arXiv Detail & Related papers (2020-06-20T03:29:02Z)
- Liquid Time-constant Networks
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
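For the Liquid Time-constant entry above, a sketch of the state update the paper proposes: dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A, integrated with a fused implicit-Euler step so that the effective time constant varies with the input. The cell size, the sigmoid gate f, and the toy driving input are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# One liquid time-constant (LTC) cell, following the update law of the paper:
#   dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A,
# so the effective time constant 1 / (1/tau + f) varies with the input.
# Sizes, the sigmoid gate f, and the driving input are illustrative choices.
N, M, dt = 8, 3, 0.05
tau = np.ones(N)
A = rng.normal(size=N)
Wx = rng.normal(scale=0.5, size=(N, N))
Wi = rng.normal(scale=0.5, size=(N, M))
mu = rng.normal(size=N)

def f(x, I):
    return 1.0 / (1.0 + np.exp(-(Wx @ x + Wi @ I + mu)))

def ltc_step(x, I):
    """Fused implicit-Euler step; keeps the state stable and bounded."""
    g = f(x, I)
    return (x + dt * g * A) / (1.0 + dt * (1.0 / tau + g))

x = np.zeros(N)
for t in range(200):  # drive the cell with a toy sinusoidal input
    x = ltc_step(x, np.sin(0.1 * t) * np.ones(M))
print("final state:", np.round(x, 3))
```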
This list is automatically generated from the titles and abstracts of the papers on this site.