Recursive Dynamics in Fast-Weights Homeostatic Reentry Networks: Toward Reflective Intelligence
- URL: http://arxiv.org/abs/2511.06798v1
- Date: Mon, 10 Nov 2025 07:36:45 GMT
- Title: Recursive Dynamics in Fast-Weights Homeostatic Reentry Networks: Toward Reflective Intelligence
- Authors: B. G. Chae
- Abstract summary: This study introduces the Fast-Weights Homeostatic Reentry Layer (FH-RL), a neural mechanism that integrates fast-weight associative memory, homeostatic regularization, and learned reentrant feedback to approximate self-referential computation in neural networks. We conduct controlled experiments sweeping the reentry gain $\gamma$ and evaluate emergent internal dynamics using three novel metrics: the Information Reentry Ratio (IRR), Eigen-Spectrum Recursion Index (ESRI), and Representational Drift Periodicity (RDP). These findings provide quantitative evidence that reflective, thought-like internal processing can arise from a principled balance between feedback amplification and homeostatic regulation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduces the Fast-Weights Homeostatic Reentry Layer (FH-RL), a neural mechanism that integrates fast-weight associative memory, homeostatic regularization, and learned reentrant feedback to approximate self-referential computation in neural networks. Unlike standard transformer architectures that operate in a purely feedforward manner during inference, FH-RL enables internal recurrence without external looping, allowing prior latent states to be dynamically re-entered into the ongoing computation stream. We conduct controlled experiments sweeping the reentry gain $\gamma$ and evaluate emergent internal dynamics using three novel metrics: the Information Reentry Ratio (IRR), Eigen-Spectrum Recursion Index (ESRI), and Representational Drift Periodicity (RDP). Results show that reentry quantity increases proportionally with $\gamma$, while the learned feedback matrix $W_r$ remains bounded and becomes more structured at moderate gains. Critically, a stable reflective band emerges around $\gamma \approx 0.10$-$0.20$, where internal feedback is maximally expressive yet spectrally stable: IRR rises smoothly, ESRI remains near zero, and RDP exhibits consistent low-frequency cycles. These findings provide quantitative evidence that reflective, thought-like internal processing can arise from a principled balance between feedback amplification and homeostatic regulation, linking modern fast-weight architectures to theories of cortical reentry and recursive cognition.
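The abstract does not reproduce the layer equations, so the following is a minimal sketch of how fast-weight associative memory, a reentry gain $\gamma$, and homeostatic regularization could interact inside one layer. The update rule, the Hebbian fast-weight rule, the RMS-based homeostasis, and all dimensions are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma, eta = 32, 0.15, 0.5          # width, reentry gain, fast-weight learning rate

W_r = rng.normal(0, 0.1, (d, d))       # reentrant feedback matrix (learned in the paper)
A = np.zeros((d, d))                   # fast-weight associative memory
h_prev = np.zeros(d)

def homeostatic(h, target=1.0):
    """Rescale activity toward a target RMS level (one form of homeostasis)."""
    return h * (target / (np.sqrt(np.mean(h ** 2)) + 1e-8))

for t in range(100):
    x = rng.normal(size=d)             # stand-in for the feedforward input
    reentry = gamma * (W_r @ h_prev)   # prior latent state re-entered into the stream
    h = np.tanh(x + A @ h_prev + reentry)
    h = homeostatic(h)                 # keeps the internal loop from amplifying
    A = (1 - eta) * A + eta * np.outer(h, h_prev)  # Hebbian fast-weight update
    h_prev = h
```

Sweeping `gamma` in a loop of this shape is the form of the paper's experiment: too low and the reentry path is inert, too high and the recurrence destabilizes unless the homeostatic step reins it in.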
Related papers
- Adaptive Visual Autoregressive Acceleration via Dual-Linkage Entropy Analysis [50.48301331112126]
We propose NOVA, a training-free token reduction acceleration framework for Visual AutoRegressive modeling. NOVA adaptively determines the acceleration activation scale during inference by identifying, online, the inflection point of scale-entropy growth. Experiments and analyses validate NOVA as a simple yet effective training-free acceleration framework.
arXiv Detail & Related papers (2026-02-01T17:29:42Z)
- Knowledge-Informed Kernel State Reconstruction for Interpretable Dynamical System Discovery [46.9843470803458]
MAAT (Model Aware Approximation of Trajectories) is a framework for symbolic discovery built on knowledge-informed Kernel State Reconstruction. It substantially reduces the state-estimation MSE for the trajectories and derivatives used by downstream symbolic regression.
arXiv Detail & Related papers (2026-01-29T21:15:52Z)
- Continuous-Time Homeostatic Dynamics for Reentrant Inference Models [0.0]
We formulate the Fast-Weights Homeostatic Reentry Network (FHRN) as a continuous-time neural-ODE system. The dynamics admit bounded attractors governed by an energy functional, yielding a ring-like manifold. Unlike continuous-time recurrent neural networks or liquid neural networks, FHRN achieves stability through population-level gain modulation rather than fixed recurrence or neuron-local time adaptation.
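The energy functional and ring-like manifold are not specified in the summary; the sketch below only illustrates the one mechanism it does name, a population-level gain shared across neurons that adapts to hold mean activity near a set-point. The gain-adaptation rule and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, dt = 16, 0.01
W = rng.normal(0, 1.0 / np.sqrt(d), (d, d))
h = rng.normal(size=d)
g, target, kappa = 1.0, 0.5, 0.5      # shared gain, activity set-point, adaptation rate

for _ in range(5_000):
    dh = -h + np.tanh(g * (W @ h))            # leaky continuous-time dynamics
    dg = kappa * (target - np.mean(h ** 2))   # population-level gain homeostasis
    h, g = h + dt * dh, g + dt * dg
```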
arXiv Detail & Related papers (2025-12-04T07:33:13Z)
- Physics-Informed Neural ODEs with Scale-Aware Residuals for Learning Stiff Biophysical Dynamics [4.285464959472458]
We introduce Physics-Informed Neural ODEs with Scale-Aware Residuals (PI-NODE-SR), a framework that combines a low-order explicit solver (Heun's method) with residual normalisation to balance contributions between state variables evolving on disparate timescales. It learns from a single oscillation simulated with a stiff solver (Rodas5P) and extrapolates beyond 100 ms, capturing both the oscillation frequency and near-correct amplitudes.
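Heun's method itself is standard: an Euler predictor followed by a trapezoidal corrector. The sketch below pairs it with a per-state scale vector to hint at the residual-normalisation idea; the toy vector field, scales, and step size are assumptions, not the paper's setup.

```python
import numpy as np

def heun_step(f, y, t, dt):
    """One Heun step: Euler predictor followed by a trapezoidal corrector."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

# Toy system with two states on very different timescales.
scale = np.array([1.0, 100.0])
f = lambda t, y: scale * (np.sin(t) - y)

y, dt = np.zeros(2), 1e-3
for i in range(1_000):
    y = heun_step(f, y, i * dt, dt)

# Scale-aware residual idea: normalise each state's residual by its own
# scale before averaging, so the fast variable cannot dominate a loss.
resid = f(0.0, y) / scale
```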
arXiv Detail & Related papers (2025-11-13T06:52:11Z)
- Self-induced stochastic resonance: A physics-informed machine learning approach [0.0]
Self-induced stochastic resonance (SISR) is the emergence of coherent oscillations in excitable systems driven solely by noise. This work presents a physics-informed machine learning framework for modeling and predicting SISR in the FitzHugh-Nagumo neuron.
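For concreteness, a noise-driven excitable system of the FitzHugh-Nagumo type can be simulated with Euler-Maruyama as below; the parameter values and the placement of the noise on the fast variable are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
eps, a, sigma = 0.01, 1.05, 0.08   # timescale separation, excitability (a > 1: no deterministic spiking), noise
dt, steps = 1e-4, 200_000
v, w = -1.0, -0.5
trace = np.empty(steps)

for i in range(steps):
    # Fast activator with additive noise; slow recovery variable.
    dv = (v - v ** 3 / 3 - w) / eps * dt + (sigma / np.sqrt(eps)) * np.sqrt(dt) * rng.normal()
    dw = (v + a) * dt
    v, w = v + dv, w + dw
    trace[i] = v                   # noise-induced spikes appear at a coherent rate
```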
arXiv Detail & Related papers (2025-10-26T21:49:20Z)
- PACR: Progressively Ascending Confidence Reward for LLM Reasoning [55.06373646059141]
We propose Progressively Ascending Confidence Reward (PACR), a dense, model-intrinsic reward computed directly from the model's evolving belief in the correct answer. Our results suggest that dense, model-intrinsic shaping signals can make RLVR training more effective and reliable.
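The exact shaping function is not given in the summary; under the stated reading (reward the model's rising belief in the correct answer), a toy version could be:

```python
def pacr_rewards(p_correct):
    """Toy dense reward: the step-to-step increase in the probability the
    model assigns to the gold answer (a reading of PACR, not its exact form)."""
    return [p_correct[t] - p_correct[t - 1] for t in range(1, len(p_correct))]

# Belief in the gold answer measured after each generated reasoning step:
steps = [0.10, 0.15, 0.40, 0.35, 0.90]
print(pacr_rewards(steps))   # approx. [0.05, 0.25, -0.05, 0.55]
```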
arXiv Detail & Related papers (2025-10-25T11:25:35Z)
- Beyond Ensembles: Simulating All-Atom Protein Dynamics in a Learned Latent Space [4.5211402678313135]
We introduce the Graph Latent Dynamics Propagator (GLDP), a modular component for simulating dynamics within the learned latent space of LD-FPG. We compare three classes of propagators: (i) score-guided Langevin dynamics, (ii) Koopman-based linear operators, and (iii) autoregressive neural networks. Within a unified encoder-propagator-decoder framework, we evaluate long-horizon stability, backbone and side-chain ensemble fidelity, and functional free-energy landscapes.
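Of the three propagator classes, score-guided Langevin dynamics is the simplest to sketch. Below, `score` is a hypothetical learned network standing in for $\nabla_z \log p(z)$, and the update is plain unadjusted Langevin dynamics in latent space; nothing here is the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def score(z):
    """Hypothetical learned score network grad_z log p(z); a standard-normal
    stand-in here so the sketch is self-contained."""
    return -z

def langevin_propagate(z, n_steps=1_000, step=1e-2):
    # Unadjusted Langevin dynamics in the learned latent space.
    for _ in range(n_steps):
        z = z + step * score(z) + np.sqrt(2 * step) * rng.normal(size=z.shape)
    return z

z0 = rng.normal(size=8)       # latent code from a hypothetical encoder
z1 = langevin_propagate(z0)   # propagated latent; a decoder would map it back to atoms
```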
arXiv Detail & Related papers (2025-09-02T11:09:06Z)
- Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of recurrent neural networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enabling stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles but also enhance the numerical stability of the system integration process.
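The skew-symmetry claim is easy to verify numerically: a skew-symmetric $W$ has a purely imaginary spectrum, and $h^\top W h = 0$, so the linear flow $\dot h = Wh$ conserves the norm and rotates the state. A minimal check (the nonlinear variant here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(6, 6))
W = (M - M.T) / 2                     # skew-symmetric: W.T == -W

# Purely imaginary spectrum and h.Wh = 0, so dh/dt = Wh rotates the
# state at constant norm instead of growing or decaying.
print(np.allclose(np.linalg.eigvals(W).real, 0.0))   # True

h = rng.normal(size=6)
for _ in range(10_000):
    h = h + 1e-3 * np.tanh(W @ h)     # nonlinear variant stays on a bounded cycle
print(np.linalg.norm(h))              # norm stays roughly O(1)
```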
arXiv Detail & Related papers (2025-04-16T10:39:43Z)
- Fast Training of Recurrent Neural Networks with Stationary State Feedbacks [48.22082789438538]
Recurrent neural networks (RNNs) have recently demonstrated strong performance and faster inference than Transformers. We propose a novel method that replaces backpropagation through time (BPTT) with a fixed gradient feedback mechanism.
arXiv Detail & Related papers (2025-03-29T14:45:52Z)
- Gated Recurrent Neural Networks with Weighted Time-Delay Feedback [55.596897987498174]
We present a novel approach to modeling long-term dependencies in sequential data by introducing a gated recurrent unit (GRU) with a weighted time-delay feedback mechanism. Our proposed model, named $\tau$-GRU, is a discretized version of a continuous-time formulation of a recurrent unit, where the dynamics are governed by delay differential equations (DDEs).
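The full gating is in the paper; the sketch below shows only the distinctive ingredient, a weighted delayed-state term $\alpha\, h(t-\tau)$ added to an otherwise ordinary GRU-style update. The delay $\tau$, weight $\alpha$, and the exact gate wiring are illustrative assumptions.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(5)
d, tau, alpha = 8, 5, 0.3                     # state size, delay, delay weight (all illustrative)
Wz, Uz = rng.normal(0, 0.3, (d, d)), rng.normal(0, 0.3, (d, d))
Wh, Uh = rng.normal(0, 0.3, (d, d)), rng.normal(0, 0.3, (d, d))

h = np.zeros(d)
buf = deque([np.zeros(d)] * tau, maxlen=tau)  # ring buffer holding past states

for t in range(100):
    x = rng.normal(size=d)
    h_del = buf[0]                            # oldest entry = h(t - tau)
    z = 1.0 / (1.0 + np.exp(-(Wz @ h + Uz @ x)))       # update gate
    h_cand = np.tanh(Wh @ h + Uh @ x + alpha * h_del)  # candidate with delay feedback
    h = (1 - z) * h + z * h_cand              # convex GRU-style blend
    buf.append(h)
```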
arXiv Detail & Related papers (2022-12-01T02:26:34Z)
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
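The construction behind the robustness claim constrains the recurrent matrices through a symmetric/skew-symmetric decomposition with a stabilizing shift; the sketch below follows that idea, with all sizes, scalings, and coefficients chosen for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
d, beta, gam = 16, 0.75, 0.5

def structured(M, beta, gam):
    """Blend of symmetric and skew-symmetric parts with a stabilizing shift,
    the decomposition idea behind the Lipschitz recurrent unit (values illustrative)."""
    sym, skew = (M + M.T) / 2, (M - M.T) / 2
    return (1 - beta) * sym + beta * skew - gam * np.eye(len(M))

A = structured(rng.normal(0, 1 / np.sqrt(d), (d, d)), beta, gam)
W = structured(rng.normal(0, 1 / np.sqrt(d), (d, d)), beta, gam)
U = rng.normal(0, 0.3, (d, d))

h, dt = np.zeros(d), 0.01
for t in range(1_000):
    x = rng.normal(size=d)
    h = h + dt * (A @ h + np.tanh(W @ h + U @ x))   # explicit Euler on the ODE view
```

Pushing `beta` toward 1 makes the linear part nearly skew-symmetric (pure rotation), while `gam` pulls its spectrum into the stable half-plane, which is what keeps small input or parameter perturbations from being amplified.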
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- Multivariate Functional Regression via Nested Reduced-Rank Regularization [2.730097437607271]
We propose a nested reduced-rank regression (NRRR) approach for fitting regression models with multivariate functional responses and predictors. We show through non-asymptotic analysis that NRRR achieves an error rate at least comparable to that of reduced-rank regression. We apply NRRR to an electricity demand problem, relating the trajectories of daily electricity consumption to those of daily temperature.
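For reference, the classical reduced-rank regression baseline that NRRR is compared against has a closed form: project the OLS fit onto its top-$r$ response-space directions. A minimal sketch on synthetic data (the nested, functional-basis layer that is NRRR's contribution is not shown):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, q, r = 200, 10, 6, 2
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))   # rank-r coefficients
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

# Reduced-rank regression: project the OLS fit onto the top-r principal
# directions of the fitted values in response space.
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]                 # rank-r projector in response space
B_rrr = B_ols @ P                     # rank-r coefficient estimate
```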
arXiv Detail & Related papers (2020-03-10T14:58:54Z)