RNNs perform task computations by dynamically warping neural representations
- URL: http://arxiv.org/abs/2512.04310v1
- Date: Wed, 03 Dec 2025 23:01:43 GMT
- Title: RNNs perform task computations by dynamically warping neural representations
- Authors: Arthur Pellegrino, Angus Chadwick
- Abstract summary: We show that recurrent neural networks (RNNs) perform computations by dynamically warping their representations of task variables. By characterising the time-varying geometry of RNNs, we show that dynamic warping is a fundamental feature of their computations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analysing how neural networks represent data features in their activations can help interpret how they perform tasks. Hence, a long line of work has focused on mathematically characterising the geometry of such "neural representations." In parallel, machine learning has seen a surge of interest in understanding how dynamical systems perform computations on time-varying input data. Yet, the link between computation-through-dynamics and representational geometry remains poorly understood. Here, we hypothesise that recurrent neural networks (RNNs) perform computations by dynamically warping their representations of task variables. To test this hypothesis, we develop a Riemannian geometric framework that enables the derivation of the manifold topology and geometry of a dynamical system from the manifold of its inputs. By characterising the time-varying geometry of RNNs, we show that dynamic warping is a fundamental feature of their computations.
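The abstract's core construction, deriving a representation's geometry from the geometry of its inputs, can be illustrated with a minimal sketch (this is a toy illustration, not the paper's framework; the random RNN, the ring-shaped input manifold, and all parameters below are assumptions for the example). For a smooth input-to-state map h(θ), the Euclidean metric on hidden states pulls back to an induced metric g(θ) = ‖dh/dθ‖² on the input manifold; a non-constant g(θ) means the network stretches some regions of the manifold and compresses others, i.e. it warps its representation.

```python
import numpy as np

# Toy sketch: measure how an RNN warps a ring-shaped input manifold by
# pulling back the Euclidean metric on hidden states through the
# input-to-state map. All weights are random; this is illustrative only.
rng = np.random.default_rng(0)
N = 32                                         # hidden units
W = rng.normal(0.0, 0.5 / np.sqrt(N), (N, N))  # contractive recurrent weights
B = rng.normal(0.0, 1.0, (N, 2))               # input weights

def hidden_state(theta, steps=50):
    """Iterate the RNN to its fixed point for a constant input on the ring."""
    x = np.array([np.cos(theta), np.sin(theta)])
    h = np.zeros(N)
    for _ in range(steps):
        h = np.tanh(W @ h + B @ x)
    return h

def induced_metric(theta, eps=1e-5):
    """Pullback metric g(theta) = ||dh/dtheta||^2 via central differences."""
    J = (hidden_state(theta + eps) - hidden_state(theta - eps)) / (2 * eps)
    return float(J @ J)

thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
g = np.array([induced_metric(t) for t in thetas])
# A non-constant metric means the RNN stretches some arcs of the ring and
# compresses others, i.e. it warps the input manifold.
print(g.min(), g.max())
```

Even this untrained network yields a non-uniform induced metric; the paper's claim concerns how such warping varies in time and supports task computations.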
Related papers
- Emergent Riemannian geometry over learning discrete computations on continuous manifolds [1.8665975431697432]
We show that signatures of discrete computations emerge in the representational geometry of neural networks as they learn.
We demonstrate how different learning regimes (rich vs. lazy) have contrasting metric and curvature structures, affecting the ability of the networks to generalise to unseen inputs.
arXiv Detail & Related papers (2025-11-28T20:29:06Z) - Neural Symbolic Regression of Complex Network Dynamics [28.356824329954495]
We propose Physically Inspired Neural Dynamics Regression (PI-NDSR) to automatically learn the symbolic expression of dynamics.
We evaluate our method on synthetic datasets generated by various dynamics and real datasets on disease spreading.
arXiv Detail & Related papers (2024-10-15T02:02:30Z) - Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Image segmentation with traveling waves in an exactly solvable recurrent neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z) - Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis [7.660368798066376]
We introduce a novel similarity metric that compares two systems at the level of their dynamics.
Our method opens the door to comparative analyses of the essential temporal structure of computation in neural circuits.
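Comparing systems "at the level of their dynamics" can be given a heavily simplified flavour (this is not the paper's Dynamical Similarity Analysis; the linear fit, the synthetic data, and the spectrum comparison below are stand-in assumptions): fit a linear dynamics operator to each system's trajectory by least squares, then compare eigenvalue spectra, which are invariant to a change of coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear_dynamics(X):
    """Least-squares fit of x_{t+1} ~ A x_t from a trajectory X of shape (T, n)."""
    A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
    return A.T

# One noisy linear system, observed twice in different orthonormal bases.
n, T = 5, 200
A_true = rng.normal(0.0, 0.4 / np.sqrt(n), (n, n))
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random change of basis

X = np.zeros((T, n))
x = rng.normal(size=n)
for t in range(T):
    x = A_true @ x + 0.01 * rng.normal(size=n)
    X[t] = x
Y = X @ Q.T                                    # same dynamics, rotated coordinates

# The fitted operators are similar matrices, so their spectra coincide even
# though the raw matrices (and the state geometries) look different.
spec_x = np.sort(np.abs(np.linalg.eigvals(fit_linear_dynamics(X))))
spec_y = np.sort(np.abs(np.linalg.eigvals(fit_linear_dynamics(Y))))
print(np.max(np.abs(spec_x - spec_y)))
```

The two systems differ arbitrarily in geometry yet agree exactly in their linearised temporal structure, which is the kind of invariance a dynamics-level comparison is after.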
arXiv Detail & Related papers (2023-06-16T20:11:38Z) - Interpretable statistical representations of neural population dynamics and geometry [4.459704414303749]
We introduce a representation learning method, MARBLE, that decomposes on-manifold dynamics into local flow fields and maps them into a common latent space.
In simulated non-linear dynamical systems, recurrent neural networks, and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations.
These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations.
arXiv Detail & Related papers (2023-04-06T21:11:04Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Learning Deep Morphological Networks with Neural Architecture Search [19.731352645511052]
We propose a method based on meta-learning to incorporate morphological operators into Deep Neural Networks.
The learned architecture demonstrates how our novel morphological operations significantly increase DNN performance on various tasks.
arXiv Detail & Related papers (2021-06-14T19:19:48Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z) - The geometry of integration in text classification RNNs [20.76659136484842]
We study recurrent networks trained on a battery of both natural and synthetic text classification tasks.
We find the dynamics of these trained RNNs to be both interpretable and low-dimensional.
Our observations span multiple architectures and datasets, reflecting a common mechanism RNNs employ to perform text classification.
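The dimensionality analysis behind such findings can be sketched as follows (a toy random RNN on random token sequences, not the trained text-classification models from the paper): gather hidden states across sequences and time, then inspect the PCA variance spectrum; a fast decay indicates low-dimensional dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, n_seq, vocab = 64, 40, 20, 8
W = rng.normal(0.0, 0.5 / np.sqrt(N), (N, N))  # recurrent weights
E = rng.normal(0.0, 1.0, (N, vocab))           # token embeddings

states = []
for _ in range(n_seq):
    h = np.zeros(N)
    for tok in rng.integers(0, vocab, T):
        h = np.tanh(W @ h + E[:, tok])
        states.append(h.copy())
X = np.array(states)                           # (n_seq * T, N) hidden states

# PCA spectrum: the fraction of variance captured by each principal component.
Xc = X - X.mean(axis=0)
svals = np.linalg.svd(Xc, compute_uv=False)
var_ratio = svals**2 / np.sum(svals**2)
print(var_ratio[:5])
```

For trained classification RNNs, the paper reports that most variance concentrates in a handful of components; the same spectrum computed here for an untrained network gives the baseline against which that is judged.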
arXiv Detail & Related papers (2020-10-28T17:58:53Z) - Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.