Spatio-Temporal Activation Function To Map Complex Dynamical Systems
- URL: http://arxiv.org/abs/2009.08931v1
- Date: Sun, 6 Sep 2020 23:08:25 GMT
- Title: Spatio-Temporal Activation Function To Map Complex Dynamical Systems
- Authors: Parth Mahendra
- Abstract summary: Reservoir computing, which is a subset of recurrent neural networks, is actively used to simulate complex dynamical systems.
The inclusion of a temporal term alters the fundamental nature of an activation function: it provides the capability to capture the complex dynamics of time series data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most of the real world is governed by complex and chaotic dynamical systems, all of which pose a challenge when modelled with neural networks. Currently, reservoir computing, a subset of recurrent neural networks, is actively used to simulate complex dynamical systems. In this work, a two-dimensional activation function is proposed which includes an additional temporal term to impart dynamic behaviour on its output. The inclusion of a temporal term alters the fundamental nature of an activation function: it provides the capability to capture the complex dynamics of time series data without relying on recurrent neural networks.
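The abstract does not specify the activation's exact functional form, so the following is only a minimal sketch of the idea: a two-dimensional activation whose output depends on both the pre-activation x and the current time t. The separable form tanh(x) * cos(omega * t), the class name SpatioTemporalActivation, and the parameter omega are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class SpatioTemporalActivation:
    """Illustrative two-dimensional activation f(x, t).

    Assumed separable form: tanh(x) * cos(omega * t). The temporal
    factor imparts dynamic behaviour: the same input x maps to
    different outputs at different times t.
    """

    def __init__(self, omega: float = 1.0):
        self.omega = omega  # assumed temporal frequency parameter

    def __call__(self, x: np.ndarray, t: float) -> np.ndarray:
        spatial = np.tanh(x)               # conventional saturating part
        temporal = np.cos(self.omega * t)  # time-dependent modulation
        return spatial * temporal

# Usage: one pre-activation vector evaluated at three time stamps.
act = SpatioTemporalActivation(omega=2.0)
x = np.array([0.5, -1.0, 2.0])
for t in (0.0, 0.5, 1.0):
    print(t, act(x, t))
```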
Related papers
- Neural Symbolic Regression of Complex Network Dynamics [28.356824329954495]
We propose Physically Inspired Neural Dynamics Symbolic Regression (PI-NDSR) to automatically learn the symbolic expression of dynamics.
We evaluate our method on synthetic datasets generated by various dynamics and real datasets on disease spreading.
arXiv Detail & Related papers (2024-10-15T02:02:30Z)
- Explicit construction of recurrent neural networks effectively approximating discrete dynamical systems [0.0]
We consider arbitrary bounded discrete time series originating from a dynamical system with recursivity.
We provide an explicit construction of recurrent neural networks which effectively approximate the corresponding discrete dynamical systems.
arXiv Detail & Related papers (2024-09-28T07:59:45Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- LINOCS: Lookahead Inference of Networked Operators for Continuous Stability [4.508868068781057]
We introduce Lookahead-driven Inference of Networked Operators for Continuous Stability (LINOCS).
LINOCS is a robust learning procedure for identifying hidden dynamical interactions in noisy time-series data.
We demonstrate LINOCS' ability to recover the ground truth dynamical operators underlying synthetic time-series data.
arXiv Detail & Related papers (2024-04-28T18:16:58Z)
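As a hedged sketch of the lookahead idea named in the LINOCS entry above: rather than scoring a dynamical operator on one-step transitions alone, a lookahead loss also penalizes multi-step rollout errors, favouring operators that stay consistent with the data over longer horizons. The linear operator, horizon, and synthetic data below are illustrative choices, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear system x_{t+1} = A_true @ x_t + noise (illustrative).
dim, T = 2, 200
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])
X = np.empty((T, dim))
X[0] = rng.normal(size=dim)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + rng.normal(scale=0.01, size=dim)

def lookahead_loss(A, X, horizon=5):
    """Sum of k-step rollout errors ||x_{t+k} - A^k x_t||^2, k <= horizon."""
    loss = 0.0
    for k in range(1, horizon + 1):
        Ak = np.linalg.matrix_power(A, k)
        loss += np.sum((X[k:] - X[:-k] @ Ak.T) ** 2)
    return loss

# A one-step least-squares fit is decent, but the lookahead loss is the
# quantity a LINOCS-style procedure would drive down during training.
A_onestep = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
print("lookahead loss, true operator:     ", lookahead_loss(A_true, X))
print("lookahead loss, one-step estimate: ", lookahead_loss(A_onestep, X))
```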
- On the effectiveness of neural priors in modeling dynamical systems [28.69155113611877]
We discuss the architectural regularization that neural networks offer when learning such systems.
We show that simple coordinate networks with few layers can be used to solve multiple problems in modelling dynamical systems.
arXiv Detail & Related papers (2023-03-10T06:21:24Z)
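A minimal sketch of what the entry above calls a coordinate network: a small multilayer perceptron that maps a time coordinate directly to the system state and is fitted to one trajectory. The architecture, target system, and training loop are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Trajectory of a simple system to fit: x(t) = sin(t).
t = np.linspace(0.0, 2 * np.pi, 200)[:, None]  # time coordinates
x = np.sin(t)                                  # states

# One-hidden-layer coordinate network: x_hat = tanh(t W1 + b1) W2 + b2.
h = 32
W1 = rng.normal(scale=1.0, size=(1, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, 1)); b2 = np.zeros(1)

lr = 1e-2
for step in range(2000):
    z = np.tanh(t @ W1 + b1)        # hidden features of the coordinate
    err = (z @ W2 + b2) - x         # prediction error
    gW2 = z.T @ err / len(t); gb2 = err.mean(0)
    dz = (err @ W2.T) * (1 - z ** 2)   # backprop through tanh
    gW1 = t.T @ dz / len(t); gb1 = dz.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float(np.mean((np.tanh(t @ W1 + b1) @ W2 + b2 - x) ** 2)))
```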
- Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics [6.829711787905569]
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples we demonstrate that our model can well approximate the original system.
arXiv Detail & Related papers (2022-06-07T02:25:38Z)
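As a hedged illustration of the decomposition described in the dLDS entry above: the state evolves under a sparse, time-varying mixture of a small dictionary of linear operators. The dictionary, coefficients, and dimensions below are illustrative; in the paper both are learned via dictionary learning and sparse tracking.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dictionary of linear operators (not the learned ones
# from the paper): each f_j is a small state-transition matrix.
dim, n_ops = 3, 4
dictionary = [rng.normal(scale=0.3, size=(dim, dim)) for _ in range(n_ops)]

def dlds_step(x, coeffs):
    """One discrete-time step x_{t+1} = (sum_j c_j * f_j) @ x_t.

    In dLDS the coefficients c_j are sparse and re-inferred over time;
    here they are supplied directly for illustration.
    """
    A = sum(c * f for c, f in zip(coeffs, dictionary))
    return A @ x

x = rng.normal(size=dim)
for t in range(5):
    coeffs = np.zeros(n_ops)
    coeffs[t % n_ops] = 1.0  # sparse mixture: one operator active per step
    x = dlds_step(x, coeffs)
    print(t, x)
```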
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
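A heavily simplified, hedged sketch of the homeostasis idea in the MPATH entry above: a neuron integrates input on its membrane potential while its activation threshold drifts with recent firing, pulling activity toward a dynamic equilibrium. The constants and update rules are illustrative, not the MPATH model's actual equations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative constants (not from the MPATH paper).
leak, adapt_rate, target_rate, dt = 0.1, 0.05, 0.2, 1.0
v, theta = 0.0, 1.0  # membrane potential and activation threshold

for step in range(50):
    i_in = rng.uniform(0.0, 0.5)   # random input current
    v += dt * (i_in - leak * v)    # leaky membrane integration
    fired = v >= theta
    if fired:
        v = 0.0                    # reset after a spike
    # Homeostasis: the threshold rises when the neuron fires and decays
    # when it is silent, regulating long-run activity toward target_rate.
    theta += adapt_rate * ((1.0 if fired else 0.0) - target_rate)
    theta = max(theta, 0.1)
print("final threshold:", theta)
```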
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
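As a hedged sketch of the multi-scale idea in the entry above: the hidden state is separated into modules that update on progressively slower clocks, so higher modules can retain longer dependencies. The exponential clock scheme is borrowed from generic multi-timescale RNNs for illustration; the paper's incremental module-adding training procedure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

dim_in, dim_mod, n_mod = 4, 8, 3
# One weight set per module; module m updates every 2**m steps.
W_in = [rng.normal(scale=0.3, size=(dim_in, dim_mod)) for _ in range(n_mod)]
W_rec = [rng.normal(scale=0.3, size=(dim_mod, dim_mod)) for _ in range(n_mod)]
h = [np.zeros(dim_mod) for _ in range(n_mod)]

def step(x, t):
    for m in range(n_mod):
        if t % (2 ** m) == 0:  # slower clocks for higher modules
            h[m] = np.tanh(x @ W_in[m] + h[m] @ W_rec[m])
    return np.concatenate(h)   # full hidden state for a readout

for t in range(8):
    out = step(rng.normal(size=dim_in), t)
print("hidden state size:", out.size)
```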
- Learn to cycle: Time-consistent feature discovery for action recognition [83.43682368129072]
Generalizing over temporal variations is a prerequisite for effective action recognition in videos.
We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors temporal activations with potential variations.
We show consistent improvement when using SRTG blocks, with only a minimal increase in the number of GFLOPs.
arXiv Detail & Related papers (2020-06-15T09:36:28Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
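A minimal sketch of the liquid time-constant idea from the entry above: the state follows a linear first-order ODE whose effective time constant is modulated by an input- and state-dependent nonlinearity, integrated here with a plain Euler step. The concrete nonlinearity, parameters, and solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

dim, dt, tau = 3, 0.05, 1.0
A = rng.normal(size=dim)             # bias-like target levels
W = rng.normal(scale=0.5, size=(dim, dim))

def f(x, i_in):
    # Illustrative gating nonlinearity; in the paper this mapping is learned.
    return np.maximum(0.0, np.tanh(x @ W + i_in))

def ltc_euler_step(x, i_in):
    """Euler step of dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A.

    The input-dependent term f makes the effective time constant
    "liquid", i.e. it varies with the input and the state.
    """
    gate = f(x, i_in)
    return x + dt * (-(1.0 / tau + gate) * x + gate * A)

x = np.zeros(dim)
for t in range(100):
    x = ltc_euler_step(x, i_in=np.sin(0.1 * t))
print("state after 100 steps:", x)
```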
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
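A hedged sketch of the construction in the entry above: a nominal dynamics model is projected so that a Lyapunov function V provably decreases along trajectories, making stability hold by construction. Here V is a fixed quadratic and the nominal model a random linear map; in the paper both are learned neural networks.

```python
import numpy as np

rng = np.random.default_rng(6)

dim, alpha, dt = 3, 0.5, 0.05
F = rng.normal(size=(dim, dim))  # nominal (possibly unstable) dynamics

def V(x):
    return 0.5 * np.dot(x, x)    # simple quadratic Lyapunov candidate

def grad_V(x):
    return x

def stable_dynamics(x):
    """Project the nominal dynamics so that dV/dt <= -alpha * V(x)."""
    f_hat = F @ x
    g = grad_V(x)
    violation = np.dot(g, f_hat) + alpha * V(x)
    if violation > 0.0:
        # Remove the component of f_hat that would increase V too fast.
        f_hat = f_hat - g * violation / np.dot(g, g)
    return f_hat

x = rng.normal(size=dim)
for t in range(200):
    x = x + dt * stable_dynamics(x)             # Euler integration
print("||x|| after 200 steps:", np.linalg.norm(x))  # decays toward 0
```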
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.