On the dynamics of convolutional recurrent neural networks near their critical point
- URL: http://arxiv.org/abs/2405.13854v1
- Date: Wed, 22 May 2024 17:29:12 GMT
- Title: On the dynamics of convolutional recurrent neural networks near their critical point
- Authors: Aditi Chandra, Marcelo O. Magnasco
- Abstract summary: We study the dynamical properties of a single-layer convolutional recurrent network with a smooth sigmoidal activation function.
We present analytical solutions for the steady states when the network is forced with a single oscillation.
We derive the relationships governing the temporal decay and the spatial propagation length as functions of this background value.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine the dynamical properties of a single-layer convolutional recurrent network with a smooth sigmoidal activation function, for small values of the inputs and when the convolution kernel is unitary, so all eigenvalues lie exactly on the unit circle. Such networks have a variety of hallmark properties: the outputs depend on the inputs via compressive nonlinearities such as cube roots, and both the timescales of relaxation and the length-scales of signal propagation depend sensitively on the inputs as power laws, both diverging as the input goes to 0. The basic dynamical mechanism is that inputs to the network generate ongoing activity, which in turn controls how additional inputs or signals propagate spatially or attenuate in time. We present analytical solutions for the steady states when the network is forced with a single oscillation and when a background value creates a steady state of ongoing activity, and derive the relationships governing the temporal decay and the spatial propagation length as functions of this background value.
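As a rough illustration of the mechanism described in the abstract, here is a minimal numerical sketch (not the authors' code; the network size, the particular unitary kernel construction, and the perturbation probe are all illustrative assumptions): a ring of units updated by a circular convolution with a unitary kernel followed by tanh, where a uniform background input sets the ongoing activity and thereby the relaxation time of small perturbations.

```python
import numpy as np

# Minimal sketch (not the authors' code): a convolutional recurrent network
# x[t+1] = tanh(k * x[t] + u), with '*' a circular convolution on a ring of
# N units. The kernel is defined by unit-modulus Fourier multipliers with
# conjugate symmetry, so it is unitary (all eigenvalues on the unit circle)
# and the spatial kernel is real.

N = 256
rng = np.random.default_rng(0)

phases = rng.uniform(-np.pi, np.pi, N // 2 - 1)
eig = np.ones(N, dtype=complex)
eig[1:N // 2] = np.exp(1j * phases)
eig[N // 2 + 1:] = np.conj(eig[1:N // 2][::-1])

def step(x, u):
    """One update: unitary circular convolution, then the sigmoidal nonlinearity."""
    conv = np.fft.ifft(eig * np.fft.fft(x)).real
    return np.tanh(conv + u)

def relaxation_time(background, n_steps=20000):
    """Crude estimate of how long a small perturbation on top of a uniform
    background input takes to decay by a factor of e."""
    x = np.zeros(N)
    for _ in range(2000):                  # settle onto the ongoing activity
        x = step(x, background)
    x_ref = x.copy()
    x = x + 1e-6 * rng.standard_normal(N)  # tiny perturbation
    d0 = np.linalg.norm(x - x_ref)
    for t in range(1, n_steps):
        x, x_ref = step(x, background), step(x_ref, background)
        if np.linalg.norm(x - x_ref) < d0 / np.e:
            return t
    return n_steps

# Smaller backgrounds -> longer relaxation times.
for b in (0.1, 0.03, 0.01):
    print(f"background={b:5.2f}  relaxation_time={relaxation_time(b)}")
```

Under these assumptions, the measured relaxation time should grow as the background input shrinks, consistent with the power-law divergence the abstract describes: the background activity sets how far the sigmoid operates from its unit-slope origin, and hence how quickly additional signals attenuate.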
Related papers
- Input-driven circuit reconfiguration in critical recurrent neural networks. Marcelo O. Magnasco [0.0]
We present a very simple single-layer recurrent network whose pathways can be reconfigured "on the fly" using only its inputs.
We show this network solves the classical connectedness problem by allowing signal propagation only along the regions to be evaluated.
arXiv Detail & Related papers (2024-05-23T20:15:23Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks [50.2033914945157]
We present a representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
The framework consists of three stages, among them estimating pairwise intensity functions and learning a projection which minimises a notion of intensity reconstruction error.
Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses.
arXiv Detail & Related papers (2023-06-09T15:38:25Z)
- Input correlations impede suppression of chaos and learning in balanced rate networks [58.720142291102135]
Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity.
We show that in firing-rate networks in the balanced state, external control of recurrent dynamics depends strongly on correlations in the input.
arXiv Detail & Related papers (2022-01-24T19:20:49Z)
- Relational Self-Attention: What's Missing in Attention for Video Understanding [52.38780998425556]
We introduce a relational feature transform, dubbed relational self-attention (RSA).
Our experiments and ablation studies show that the RSA network substantially outperforms convolution and self-attention counterparts.
arXiv Detail & Related papers (2021-11-02T15:36:11Z)
- Periodic Activation Functions Induce Stationarity [19.689175123261613]
We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors.
In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
arXiv Detail & Related papers (2021-10-26T11:10:37Z)
- Residual networks classify inputs based on their neural transient dynamics [0.0]
We show analytically that there is a cooperation and competition dynamics between residuals corresponding to each input dimension.
In cases where residuals do not converge to an attractor state, their internal dynamics are separable for each input class, and the network can reliably approximate the output.
arXiv Detail & Related papers (2021-01-08T13:54:37Z)
- Feedback-induced instabilities and dynamics in the Jaynes-Cummings model [62.997667081978825]
We investigate the coherence and steady-state properties of the Jaynes-Cummings model subjected to time-delayed coherent feedback.
The introduced feedback qualitatively modifies the dynamical response and steady-state quantum properties of the system.
arXiv Detail & Related papers (2020-06-20T10:07:01Z)
- Stability of Internal States in Recurrent Neural Networks Trained on Regular Languages [0.0]
We study the stability of neural networks trained to recognize regular languages.
In the saturated regime that trained networks settle into, analysis of the network activations reveals a set of clusters resembling the discrete states of a finite state machine.
We show that transitions between these states in response to input symbols are deterministic and stable.
arXiv Detail & Related papers (2020-06-18T19:50:15Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations; a hedged sketch of the gated first-order update appears after this list.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
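To make the "linear first-order dynamical systems" idea in the Liquid Time-constant Networks entry concrete, here is a hedged sketch of one cell (not the paper's implementation; the layer sizes, initialization, sigmoid gate, and explicit-Euler solver are all illustrative assumptions), following the commonly cited update form dx/dt = -(1/tau + f(x, I)) x + f(x, I) A, in which a nonlinear gate f modulates the effective time constant.

```python
import numpy as np

# Hedged sketch of a liquid time-constant (LTC) cell: the state is a linear
# first-order system whose effective time constant is modulated by a
# nonlinear gate,  dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A.
# Sizes, initialization, and the Euler solver are illustrative choices.

class LTCCell:
    def __init__(self, n_units, n_inputs, tau=1.0, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
        self.U = rng.standard_normal((n_units, n_inputs)) / np.sqrt(n_inputs)
        self.b = np.zeros(n_units)
        self.A = np.ones(n_units)   # per-unit bias of the driven fixed point
        self.tau = tau
        self.dt = dt

    def f(self, x, inputs):
        """Nonlinear gate coupling state and input (sigmoid, so f is in (0, 1))."""
        return 1.0 / (1.0 + np.exp(-(self.W @ x + self.U @ inputs + self.b)))

    def step(self, x, inputs):
        """One explicit-Euler step; the effective time constant
        1 / (1/tau + f) shrinks as the gate opens."""
        g = self.f(x, inputs)
        dxdt = -(1.0 / self.tau + g) * x + g * self.A
        return x + self.dt * dxdt

cell = LTCCell(n_units=8, n_inputs=2)
x = np.zeros(8)
for t in range(100):
    x = cell.step(x, np.array([np.sin(0.1 * t), 1.0]))
print(x)   # under these assumptions, each unit stays bounded within [0, A]
```

Because the gate stays in (0, 1), the effective decay rate is always at least 1/tau and each unit's driven fixed point lies between 0 and A, which is one way to see the stable, bounded behavior the summary mentions.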
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.