Analysing Rescaling, Discretization, and Linearization in RNNs for Neural System Modelling
- URL: http://arxiv.org/abs/2312.15974v6
- Date: Sun, 06 Apr 2025 12:27:25 GMT
- Title: Analysing Rescaling, Discretization, and Linearization in RNNs for Neural System Modelling
- Authors: Mariano Caruso, Cecilia Jarne
- Abstract summary: Recurrent Neural Networks (RNNs) are widely used for modelling neural activity, yet the mathematical interplay of core procedures is uncharacterized. This study establishes the conditions under which these procedures commute, enabling flexible application in computational neuroscience. Our findings directly guide the design of biologically plausible RNNs for simulating neural dynamics in decision-making and motor control.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent Neural Networks (RNNs) are widely used for modelling neural activity, yet the mathematical interplay of the core procedures used to analyze them (temporal rescaling, discretization, and linearization) remains uncharacterized. This study establishes the conditions under which these procedures commute, enabling flexible application in computational neuroscience. We rigorously analyze the mathematical foundations of the three procedures, formalizing their application to continuous-time RNN dynamics governed by differential equations. By deriving the transformed equations under rescaling, discretization, and linearization, we determine commutativity criteria and evaluate their effects on network stability, numerical implementation, and linear approximations. We demonstrate that rescaling and discretization commute when time-step adjustments align with scaling factors. Similarly, linearization and discretization (or rescaling) yield equivalent dynamics regardless of order, provided activation functions operate near equilibrium points. Our findings directly guide the design of biologically plausible RNNs for simulating neural dynamics in decision-making and motor control, where temporal alignment and stability are critical.
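To make the first commutativity claim concrete, here is a minimal numerical sketch (not the authors' code). It assumes a standard leaky firing-rate RNN, tau * dh/dt = -h + W tanh(h) + x, with forward-Euler integration, and checks that rescaling time by a factor alpha and then discretizing with step dt/alpha reproduces the trajectory obtained by discretizing the original system with step dt.

```python
import numpy as np

# Assumed continuous-time RNN: tau * dh/dt = -h + W @ tanh(h) + x.
# Rescaling time t -> alpha * t turns tau into tau / alpha, so discretizing
# the rescaled system with step dt / alpha should match the original system
# discretized with step dt -- the time-step/scaling-factor alignment above.
rng = np.random.default_rng(0)
n, tau, dt, alpha, steps = 5, 1.0, 0.01, 4.0, 1000
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
x = rng.normal(size=n)
h0 = rng.normal(size=n)

def euler_trajectory(tau_eff, step):
    h, traj = h0.copy(), []
    for _ in range(steps):
        h = h + (step / tau_eff) * (-h + W @ np.tanh(h) + x)
        traj.append(h.copy())
    return np.array(traj)

original = euler_trajectory(tau, dt)                  # discretize the original system
rescaled = euler_trajectory(tau / alpha, dt / alpha)  # rescale first, then discretize
print(np.max(np.abs(original - rescaled)))            # ~0: the two orders agree
```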
Related papers
- Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs).
We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations.
Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
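As a quick illustration of why skew-symmetry matters (our gloss, not the paper's code): a skew-symmetric W has purely imaginary eigenvalues, so the linear flow dh/dt = W h conserves the norm of h and can only rotate, which is the ingredient needed for sustained oscillations rather than decay or blow-up.

```python
import numpy as np

# A skew-symmetric matrix W = A - A^T satisfies W^T = -W, so its
# eigenvalues are purely imaginary and dh/dt = W h preserves ||h||.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
W = A - A.T
print(np.linalg.eigvals(W).real)  # all ~0: no decay, no blow-up

h = rng.normal(size=4)
print(h @ W @ h)                  # ~0: d/dt ||h||^2 = 2 h^T W h = 0
```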
arXiv Detail & Related papers (2025-04-16T10:39:43Z)
- Hybrid Time-Domain Behavior Model Based on Neural Differential Equations and RNNs [3.416692407056595]
This paper presents a novel continuous-time domain hybrid modeling paradigm.
It integrates neural network differential models with recurrent neural networks (RNNs), creating NODE-RNN and NCDE-RNN models.
Theoretical analysis shows that this hybrid model has mathematical advantages in event-driven dynamic mutation response and propagation stability.
arXiv Detail & Related papers (2025-03-28T10:42:52Z)
- Unconditional stability of a recurrent neural circuit implementing divisive normalization [0.0]
We prove the remarkable property of unconditional local stability for an arbitrary-dimensional ORGaNICs circuit.
We show that ORGaNICs can be trained by backpropagation through time without gradient clipping/scaling.
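For context, the steady-state computation such a circuit targets is standard divisive normalization; a generic Heeger-style form (our illustration, not the ORGaNICs equations themselves) is sketched below.

```python
import numpy as np

# Generic divisive normalization: each unit's squared drive is divided by
# a constant plus the pooled squared drive of the population. This shows
# only the normalization computation, not the recurrent circuit that
# ORGaNICs uses to implement it.
def divisive_norm(drive, sigma=1.0):
    return drive**2 / (sigma**2 + np.sum(drive**2))

print(divisive_norm(np.array([1.0, 2.0, 3.0])))  # sums of outputs stay bounded
```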
arXiv Detail & Related papers (2024-09-27T17:46:05Z)
- Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems [2.170477444239546]
We develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS).
Our method builds on previous work modeling the latent state evolution via a differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs).
Our approach resolves key limitations of the rSLDS such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics.
arXiv Detail & Related papers (2024-07-19T15:32:15Z)
- Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks [3.683202928838613]
We propose the Episodic Memory Theory (EMT), illustrating that RNNs can be conceptualized as discrete-time analogs of the recently proposed General Sequential Episodic Memory Model.
We introduce a novel set of algorithmic tasks tailored to probe the variable binding behavior in RNNs.
Our empirical investigations reveal that trained RNNs consistently converge to the variable binding circuit, thus indicating universality in the dynamics of RNNs.
arXiv Detail & Related papers (2023-10-03T20:52:37Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with the desired semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
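The core observation can be seen in miniature (an assumed illustration, not the paper's architecture): an explicit Runge-Kutta step is itself a short recurrence over stage evaluations, so a recurrent superstructure with learned weights can in principle recover such iterations.

```python
import numpy as np

# Classic RK4 for an autonomous ODE dy/dt = f(y), written as a recurrence
# over stages: each stage k is computed from the previous stage's output,
# and the weighted stage outputs are accumulated into the update.
def rk4_step(f, y, h):
    k, update = np.zeros_like(y), np.zeros_like(y)
    for a, w in [(0.0, 1/6), (0.5, 1/3), (0.5, 1/3), (1.0, 1/6)]:
        k = f(y + a * h * k)   # stage input depends on the previous stage
        update += w * k
    return y + h * update

print(rk4_step(lambda y: -y, np.array([1.0]), 0.1))  # ~exp(-0.1) = 0.90483...
```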
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Efficient, probabilistic analysis of combinatorial neural codes [0.0]
Neural networks encode inputs in the form of combinations of individual neurons' activities.
These neural codes present a computational challenge due to their high dimensionality and often large volumes of data.
We take methods previously applied only to small examples and apply them to large neural codes generated by experiments.
arXiv Detail & Related papers (2022-10-19T11:58:26Z)
- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalizes to the space of supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
- Recurrent Neural Networks for Dynamical Systems: Applications to Ordinary Differential Equations, Collective Motion, and Hydrological Modeling [0.20999222360659606]
We train and test RNNs uniquely in each task to demonstrate the broad applicability of RNNs in reconstructing and forecasting the dynamics of dynamical systems.
We analyze the performance of RNNs applied to three tasks: reconstruction of correct Lorenz solutions for a system with a formulation error, reconstruction of corrupted collective motion trajectories, and forecasting of streamflow time series possessing spikes.
arXiv Detail & Related papers (2022-02-14T20:34:49Z)
- Learning Deep Morphological Networks with Neural Architecture Search [19.731352645511052]
We propose a method based on meta-learning to incorporate morphological operators into Deep Neural Networks.
The learned architecture demonstrates how our novel morphological operations significantly increase DNN performance on various tasks.
arXiv Detail & Related papers (2021-06-14T19:19:48Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, and it learns the mapping from the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the library.
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
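A hedged sketch of the kind of unit this describes (our reading of the liquid time-constant equation, not the authors' code): a linear first-order system whose effective time constant is modulated by a learned gate f, dh/dt = -(1/tau + f(h, I)) h + f(h, I) A.

```python
import numpy as np

# One liquid time-constant (LTC) unit, forward-Euler integrated.
# Assumed form: dh/dt = -(1/tau + f(h, I)) * h + f(h, I) * A, where the
# gate f (a sigmoid here) makes the effective time constant input-dependent.
def ltc_step(h, I, tau=1.0, A=1.0, dt=0.01, w_in=1.0, w_rec=0.5, b=0.0):
    f = 1.0 / (1.0 + np.exp(-(w_in * I + w_rec * h + b)))
    return h + dt * (-(1.0 / tau + f) * h + f * A)

h = 0.0
for t in range(500):
    h = ltc_step(h, I=np.sin(0.05 * t))  # sinusoidal drive
print(h)  # state stays bounded between 0 and A
```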
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)