Smooth embeddings in contracting recurrent networks driven by regular dynamics: A synthesis for neural representation
- URL: http://arxiv.org/abs/2601.19019v1
- Date: Mon, 26 Jan 2026 23:10:39 GMT
- Title: Smooth embeddings in contracting recurrent networks driven by regular dynamics: A synthesis for neural representation
- Authors: Vikas N. O'Reilly-Shah, Alessandro Maria Selvitella
- Abstract summary: Recent empirical work has documented topology-preserving latent organization in trained recurrent models. Recent theoretical results in reservoir computing establish conditions under which the synchronization map is an embedding. Our contribution is an integrated framework that assembles generalized synchronization and embedding guarantees for contracting reservoirs.
- Score: 45.88028371034407
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recurrent neural networks trained for time-series prediction often develop latent trajectories that preserve qualitative structure of the dynamical systems generating their inputs. Recent empirical work has documented topology-preserving latent organization in trained recurrent models, and recent theoretical results in reservoir computing establish conditions under which the synchronization map is an embedding. Here we synthesize these threads into a unified account of when contracting recurrent networks yield smooth, topology-preserving internal representations for a broad and biologically relevant class of inputs: regular dynamics on invariant circles and tori. Our contribution is an integrated framework that assembles (i) generalized synchronization and embedding guarantees for contracting reservoirs, (ii) regularity mechanisms ensuring differentiability of the synchronization map under mild constraints, and (iii) a base-system viewpoint in which the invariant manifold generating the input stream is treated as the driving system. In this regular setting, the conditions commonly viewed as restrictive in chaotic-attractor analyses become mild and readily satisfied by standard contractive architectures. The framework clarifies how representational content in recurrent circuits is inherently historical: the network state encodes finite windows of input history rather than instantaneous stimuli. By consolidating disparate empirical and theoretical results under common assumptions, the synthesis yields concrete, testable expectations about when prediction-trained recurrent circuits should (or should not) form smooth latent embeddings and how required state dimension scales with the intrinsic dimension of the driving dynamics.
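The core mechanism, generalized synchronization in a contracting reservoir driven by regular dynamics, can be demonstrated with a minimal sketch. The architecture, spectral scaling, and input signal below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Illustrative sketch: a contracting reservoir driven by regular
# dynamics on an invariant circle. Two different initial states
# converge under the same input, so the reservoir state becomes a
# function of input history alone (generalized synchronization).

rng = np.random.default_rng(0)
n = 64  # reservoir dimension

# Random recurrent weights, rescaled so the update is a contraction
# (spectral norm < 1 suffices for tanh units, which are 1-Lipschitz).
W = rng.normal(size=(n, n))
W *= 0.5 / np.linalg.norm(W, 2)
w_in = rng.normal(size=n)

def step(x, u):
    """One reservoir update driven by scalar input u."""
    return np.tanh(W @ x + w_in * u)

# Input stream: a rotation on the circle (regular driving dynamics).
omega = 0.05
inputs = np.cos(2 * np.pi * omega * np.arange(500))

# Two arbitrary initial conditions driven by the same input.
x_a = rng.normal(size=n)
x_b = rng.normal(size=n)
for u in inputs:
    x_a, x_b = step(x_a, u), step(x_b, u)

# Contraction forces both trajectories onto the same synchronized
# state; their distance shrinks at least geometrically.
print(np.linalg.norm(x_a - x_b))  # ~0 after transients
```

Because the contraction washes out initial conditions, what survives in the state is exactly a finite window of input history, matching the paper's point that the representation is inherently historical.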
Related papers
- On the Generalization Behavior of Deep Residual Networks From a Dynamical System Perspective [1.0388986221727612]
Deep neural networks (DNNs) have significantly advanced machine learning, with model depth playing a central role in their successes. In this work, we establish generalization error bounds for both discrete- and continuous-time residual networks (ResNets) by combining Rademacher complexity, flow maps of dynamical systems, and the convergence behavior of ResNets in the deep-layer limit. The findings provide a unified understanding of generalization across both discrete- and continuous-time ResNets, helping to close the gap in both the order of sample complexity and assumptions between the discrete- and continuous-time settings.
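The ResNet/dynamical-system correspondence underlying these bounds can be sketched directly: a residual block of the form x + h·f(x) is an explicit Euler step, so deepening the stack approaches a continuous-time flow map. The weight matrix and step sizes below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Hypothetical sketch of the ResNet/ODE correspondence: each residual
# block is one Euler step of dx/dt = f(x), so increasing depth at
# fixed "time horizon" T approximates the continuous flow map.

def f(x, W):
    """A simple residual branch; W is an illustrative weight matrix."""
    return np.tanh(W @ x)

def resnet_flow(x0, W, depth, T=1.0):
    """`depth` residual blocks with Euler step h = T / depth."""
    h = T / depth
    x = x0.copy()
    for _ in range(depth):
        x = x + h * f(x, W)
    return x

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 4)) / 2
x0 = rng.normal(size=4)

# Successive refinements get closer together: first-order convergence
# to the deep-layer (continuous-time) limit.
d1 = np.linalg.norm(resnet_flow(x0, W, 100) - resnet_flow(x0, W, 1600))
d2 = np.linalg.norm(resnet_flow(x0, W, 800) - resnet_flow(x0, W, 1600))
print(d1, d2)  # differences shrink as depth grows
```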
arXiv Detail & Related papers (2026-02-24T13:59:06Z)
- KoopGen: Koopman Generator Networks for Representing and Predicting Dynamical Systems with Continuous Spectra [65.11254608352982]
We introduce a generator-based neural Koopman framework that models dynamics through a structured, state-dependent representation of Koopman generators. By exploiting the intrinsic Cartesian decomposition into skew-adjoint and self-adjoint components, KoopGen separates conservative transport from irreversible dissipation.
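The Cartesian decomposition this summary refers to is elementary to state for a finite-dimensional generator matrix; the sketch below is an illustration of the algebra, not KoopGen's implementation:

```python
import numpy as np

# Illustrative Cartesian decomposition of a generator matrix L into
# a skew-adjoint part (conservative transport) and a self-adjoint
# part (irreversible dissipation).

rng = np.random.default_rng(1)
L = rng.normal(size=(4, 4))  # stand-in for a learned generator

K = 0.5 * (L - L.T)          # skew-adjoint: K.T == -K
S = 0.5 * (L + L.T)          # self-adjoint:  S.T == S

# The decomposition is exact and unique.
assert np.allclose(L, K + S)

# Skew-adjoint generators have purely imaginary spectrum
# (rotation-like, energy-conserving); self-adjoint generators
# have real spectrum (growth/decay).
print(np.max(np.abs(np.linalg.eigvals(K).real)))  # ~0
print(np.max(np.abs(np.linalg.eigvals(S).imag)))  # ~0
```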
arXiv Detail & Related papers (2026-02-15T06:32:23Z)
- Constraint Breeds Generalization: Temporal Dynamics as an Inductive Bias [1.219017431258669]
We show that constraints shape dynamics to function not as limitations but as a temporal inductive bias that breeds generalization. We further argue that robust AI development requires not only scaling and removing limitations, but also computationally mastering the temporal characteristics that naturally promote generalization.
arXiv Detail & Related papers (2025-12-30T00:34:24Z)
- Parallel BiLSTM-Transformer networks for forecasting chaotic dynamics [24.960864709838436]
This study proposes a parallel predictive framework integrating Transformer and Bidirectional Long Short-Term Memory networks. The proposed hybrid model employs a dual-branch architecture, where the Transformer branch mainly captures long-range dependencies. The results consistently indicate that the proposed hybrid framework outperforms both single-branch architectures across tasks.
arXiv Detail & Related papers (2025-10-27T16:17:10Z)
- Kuramoto Orientation Diffusion Models [67.0711709825854]
Orientation-rich images, such as fingerprints and textures, often exhibit coherent angular patterns. Motivated by the role of phase synchronization in biological systems, we propose a score-based generative model. The model achieves competitive results on general image benchmarks and significantly improves generation quality on orientation-dense datasets such as fingerprints and textures.
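The phase synchronization that motivates this model is classically captured by the Kuramoto model; the coupling strength, frequency spread, and step size below are illustrative choices, not the paper's settings:

```python
import numpy as np

# Minimal Euler simulation of the classical Kuramoto model:
# d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
# With coupling K above the critical value, phases lock.

rng = np.random.default_rng(2)
N, K, dt = 200, 2.0, 0.05           # oscillators, coupling, step size
omega = rng.normal(0, 0.1, N)       # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

def order_parameter(theta):
    """r in [0, 1]: 1 means fully phase-locked."""
    return np.abs(np.mean(np.exp(1j * theta)))

r0 = order_parameter(theta)
for _ in range(400):
    # Mean-field coupling: element [i, j] is theta[j] - theta[i].
    coupling = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + K * coupling)

print(r0, order_parameter(theta))  # coupling drives r toward 1
```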
arXiv Detail & Related papers (2025-09-18T18:18:49Z)
- PowerGrow: Feasible Co-Growth of Structures and Dynamics for Power Grid Synthesis [75.14189839277928]
We present PowerGrow, a co-generative framework that significantly reduces computational overhead while maintaining operational validity. Experiments across benchmark settings show that PowerGrow outperforms prior diffusion models in fidelity and diversity. This demonstrates its ability to generate operationally valid and realistic power grid scenarios.
arXiv Detail & Related papers (2025-08-29T01:47:27Z)
- Topology-Aware Conformal Prediction for Stream Networks [68.02503121089633]
We propose Spatio-Temporal Adaptive Conformal Inference (CISTA), a novel framework that integrates network topology and temporal dynamics into the conformal prediction framework. Our results show that CISTA effectively balances prediction efficiency and coverage, outperforming existing conformal prediction methods for stream networks.
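The split-conformal recipe that such frameworks extend can be sketched in generic (non-stream) form; the toy data and stand-in predictor below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Generic split-conformal sketch: calibrate a residual quantile on
# held-out data so that prediction intervals cover at the target rate.

rng = np.random.default_rng(4)

# Toy calibration data: y = 2x + noise, with a stand-in point model.
x_cal = rng.uniform(0, 1, 500)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 500)

def predict(x):
    """Stand-in for any fitted point predictor."""
    return 2 * x

alpha = 0.1  # target 90% coverage
scores = np.abs(y_cal - predict(x_cal))  # nonconformity scores
# Finite-sample-corrected quantile of the calibration scores.
level = np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores)
q = np.quantile(scores, level)

# Intervals predict(x) +/- q cover ~90% of fresh exchangeable data.
x_test = rng.uniform(0, 1, 2000)
y_test = 2 * x_test + rng.normal(0, 0.1, 2000)
coverage = np.mean(np.abs(y_test - predict(x_test)) <= q)
print(coverage)
```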
arXiv Detail & Related papers (2025-03-06T21:21:15Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Latent Event-Predictive Encodings through Counterfactual Regularization [0.9449650062296823]
We introduce a SUrprise-GAted Recurrent neural network (SUGAR) using a novel form of counterfactual regularization.
We test the model on a hierarchical sequence prediction task, where sequences are generated by alternating hidden graph structures.
arXiv Detail & Related papers (2021-05-12T18:30:09Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space.
Our approach is either as competitive as or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
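The phase-space reconstruction that such architectures learn implicitly is classically done with delay coordinates (a Takens embedding); the dimension `m` and lag `tau` below are hypothetical hyperparameters, not values from the paper:

```python
import numpy as np

# Illustrative delay-coordinate (Takens) embedding: reconstruct a
# phase space from a scalar time series by stacking lagged copies.

def delay_embed(x, m=3, tau=5):
    """Stack m lagged copies of the scalar series x, lag tau apart."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Scalar observable of a simple rotation: a sine wave.
t = np.arange(0, 200)
x = np.sin(0.1 * t)

X = delay_embed(x, m=3, tau=5)
print(X.shape)  # (190, 3)
```

Each row of `X` is one reconstructed phase-space point; for this periodic signal the rows trace out a closed curve, the embedded invariant circle.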
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.