A Dynamical Theory of Sequential Retrieval in Input-Driven Hopfield Networks
- URL: http://arxiv.org/abs/2603.03201v2
- Date: Thu, 05 Mar 2026 11:03:58 GMT
- Title: A Dynamical Theory of Sequential Retrieval in Input-Driven Hopfield Networks
- Authors: Simone Betteti, Giacomo Baggio, Sandro Zampieri
- Abstract summary: This work develops a dynamical theory of sequential reasoning in Hopfield networks. We derive explicit conditions for self-sustained memory transitions, including gain thresholds, escape times, and collapse regimes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reasoning is the ability to integrate internal states and external inputs in a meaningful and semantically consistent flow. Contemporary machine learning (ML) systems increasingly rely on such sequential reasoning, from language understanding to multi-modal generation, often operating over dictionaries of prototypical patterns reminiscent of associative memory models. Understanding retrieval and sequentiality in associative memory models provides a powerful bridge to gain insight into ML reasoning. While the static retrieval properties of associative memory models are well understood, the theoretical foundations of sequential retrieval and multi-memory integration remain limited, with existing studies largely relying on numerical evidence. This work develops a dynamical theory of sequential reasoning in Hopfield networks. We consider the recently proposed input-driven plasticity (IDP) Hopfield network and analyze a two-timescale architecture coupling fast associative retrieval with slow reasoning dynamics. We derive explicit conditions for self-sustained memory transitions, including gain thresholds, escape times, and collapse regimes. Together, these results provide a principled mathematical account of sequentiality in associative memory models, bridging classical Hopfield dynamics and modern reasoning architectures.
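Since the exact equations are in the paper, the following is only a minimal numerical sketch of the kind of two-timescale loop the abstract describes: a fast input-driven retrieval stage whose synapses are shaped by the current input, coupled to a slow stage that turns each retrieved memory into the cue for the next one. The specific couplings, the asymmetric next-cue map, and all parameter names are illustrative assumptions, not the paper's actual IDP equations.

```python
import numpy as np

# Two-timescale sketch: fast input-driven Hopfield retrieval plus a slow
# map from the retrieved memory to the next cue. Illustrative only.
rng = np.random.default_rng(0)
N, P = 200, 4
xi = rng.choice([-1.0, 1.0], size=(P, N))    # stored +/-1 patterns

def retrieve(u, gain=4.0, steps=60):
    """Fast timescale: retrieval with input-shaped (IDP-style) synapses."""
    load = 1.0 + (xi @ u) / N                # input boosts matching patterns
    W = (xi.T * load) @ xi / N               # input-dependent Hebbian weights
    x = np.tanh(gain * u)
    for _ in range(steps):
        x = np.tanh(gain * (W @ x))          # fast relaxation to a memory
    return x

def next_cue(x):
    """Slow timescale: asymmetric Hebbian map sending memory mu to mu+1."""
    return np.roll(xi, -1, axis=0).T @ (xi @ x) / N

u = xi[0].copy()                             # initial external cue
for t in range(6):                           # slow reasoning loop
    x = retrieve(u)
    print(t, np.round(xi @ x / N, 2))        # overlaps: one memory at a time
    u = next_cue(x)                          # self-sustained transition
```

With a sufficiently large gain the fast stage settles on one memory per slow step and the overlaps cycle through the stored patterns, which is the flavor of self-sustained transition (gain thresholds, escape times) that the paper characterizes analytically.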
Related papers
- The AI Hippocampus: How Far are We From Human Memory?
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z) - Improving Multi-step RAG with Hypergraph-based Memory for Long-Context Complex Relational Modeling
Multi-step retrieval-augmented generation (RAG) has become a widely adopted strategy for enhancing large language models (LLMs). We introduce HGMem, a hypergraph-based memory mechanism that extends the concept of memory into a dynamic, expressive structure for complex reasoning and global understanding. In our approach, memory is represented as a hypergraph whose hyperedges correspond to distinct memory units, enabling the progressive formation of higher-order interactions within memory.
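As a rough illustration of the hyperedges-as-memory-units idea, here is a toy hypergraph memory; the class name, methods, and overlap-counting retrieval rule are hypothetical stand-ins, not the HGMem implementation.

```python
from collections import defaultdict

# Toy hypergraph memory: each hyperedge is one memory unit linking an
# arbitrary set of entities. All names are illustrative, not HGMem's API.
class HypergraphMemory:
    def __init__(self):
        self.edges = []                      # list of (entity set, payload)
        self.incidence = defaultdict(set)    # entity -> ids of touching hyperedges

    def write(self, entities, payload):
        eid = len(self.edges)
        self.edges.append((frozenset(entities), payload))
        for e in entities:
            self.incidence[e].add(eid)
        return eid

    def read(self, query_entities):
        # Rank memory units by how many query entities they share
        # (higher-order overlap, not just pairwise links).
        hits = defaultdict(int)
        for e in query_entities:
            for eid in self.incidence[e]:
                hits[eid] += 1
        return [self.edges[eid] for eid, _ in
                sorted(hits.items(), key=lambda kv: -kv[1])]

mem = HypergraphMemory()
mem.write({"Alice", "Bob", "contract"}, "Alice signed a contract with Bob")
mem.write({"Bob", "Carol"}, "Bob introduced Carol")
print(mem.read({"Bob", "contract"})[0][1])   # -> the contract memory unit
```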
arXiv Detail & Related papers (2025-12-30T03:13:10Z) - CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension
Current Large Language Models (LLMs) are confronted with overwhelming information volume when comprehending long-form documents. This challenge raises the imperative of a cohesive memory module, which can elevate vanilla LLMs into autonomous reading agents. We draw inspiration from Jean Piaget's Constructivist Theory, illuminating three traits of agentic memory: structured schemata, flexible assimilation, and dynamic accommodation.
arXiv Detail & Related papers (2025-10-07T02:16:30Z) - Beyond Memorization: Extending Reasoning Depth with Recurrence, Memory and Test-Time Compute Scaling
We show how different architectures and training methods affect a model's multi-step reasoning capabilities. We confirm that increasing model depth plays a crucial role in sequential computations.
arXiv Detail & Related papers (2025-08-22T18:57:08Z) - Latent Structured Hopfield Network for Semantic Association and Retrieval
Episodic memory enables humans to recall past experiences by associating semantic elements such as objects, locations, and time into coherent event representations. We propose the Latent Structured Hopfield Network (LSHN), a framework that integrates continuous Hopfield attractor dynamics into an autoencoder architecture. Unlike traditional Hopfield networks, our model is trained end-to-end with gradient descent, achieving scalable and robust memory retrieval.
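One compact way to picture continuous Hopfield attractor dynamics acting on a latent code is the Euler step below; the softmax update rule and all names are assumptions standing in for the actual LSHN architecture.

```python
import numpy as np

# Schematic continuous-attractor update in an autoencoder's latent space
# (modern-Hopfield-style softmax recall). Not the LSHN architecture itself.
def attractor_step(z, M, dt=0.1, beta=5.0):
    """One Euler step relaxing latent code z toward the stored
    latent patterns in the rows of M."""
    attn = np.exp(beta * (M @ z))
    attn /= attn.sum()                  # softmax over stored patterns
    target = M.T @ attn                 # attention-weighted recall
    return z + dt * (target - z)        # smooth, hence trainable end-to-end

# A noisy latent code drifts toward the nearest stored pattern.
M = np.eye(3)                           # three toy latent memories
z = np.array([0.9, 0.2, 0.1])
for _ in range(200):
    z = attractor_step(z, M)
print(np.round(z, 2))                   # approximately the first memory
```

Because every operation here is differentiable, such a recall step can sit inside an autoencoder and be trained end-to-end with gradient descent, which is the property the summary highlights.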
arXiv Detail & Related papers (2025-06-02T04:24:36Z) - Temporal Model On Quantum Logic
The framework formalizes the evolution of propositions over time using linear and branching temporal models. The hierarchical organization of memory is represented using directed acyclic graphs.
arXiv Detail & Related papers (2025-02-09T17:16:53Z) - Input-Driven Dynamics for Robust Memory Retrieval in Hopfield Networks
The Hopfield model provides a mathematically idealized yet insightful framework for understanding the mechanisms of memory storage and retrieval in the human brain.
We propose a novel system framework in which the external input directly influences the neural synapses and shapes the energy landscape of the Hopfield model.
This plasticity-based mechanism provides a clear energetic interpretation of the memory retrieval process and proves effective at correctly classifying highly mixed inputs.
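In symbols, one schematic way to picture this mechanism is an energy whose Hebbian terms are reweighted by the input I; the specific form below is an assumed illustration, not the paper's exact IDP energy:

```latex
E_I(x) \;=\; -\frac{1}{2N}\sum_{\mu=1}^{P} \lambda_\mu(I)\,\bigl(\xi^{\mu}\cdot x\bigr)^{2} \;-\; I\cdot x,
\qquad
\lambda_\mu(I) \;=\; 1 + \beta\,\frac{\xi^{\mu}\cdot I}{N}.
```

Under this reading, inputs overlapping a stored pattern deepen that pattern's basin, which is the sense in which the input "shapes the energy landscape."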
arXiv Detail & Related papers (2024-11-06T17:24:25Z) - Theoretical framework for quantum associative memories
Associative memory refers to the ability to relate a stored memory to an input, with the goal of restoring corrupted patterns.
We develop a comprehensive framework for a quantum associative memory based on open quantum system dynamics.
arXiv Detail & Related papers (2024-08-26T13:46:47Z) - Understanding the Language Model to Solve the Symbolic Multi-Step Reasoning Problem from the Perspective of Buffer Mechanism
We construct a symbolic multi-step reasoning task to investigate the information propagation mechanisms in Transformer models. We propose a random matrix-based algorithm to enhance the model's reasoning ability.
arXiv Detail & Related papers (2024-05-24T07:41:26Z) - In search of dispersed memories: Generative diffusion models are associative memory networks
Generative diffusion models are a family of generative machine learning techniques that has shown strong performance across many tasks.
We show that generative diffusion models can be interpreted as energy-based models and that, when trained on discrete patterns, their energy function is identical to that of modern Hopfield networks.
This equivalence allows us to interpret the supervised training of diffusion models as a synaptic learning process that encodes the associative dynamics of a modern Hopfield network in the weight structure of a deep neural network.
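For reference, the modern continuous Hopfield energy has the log-sum-exp form below (from "Hopfield Networks is All You Need," Ramsauer et al., 2020), and for any energy-based model with p(x) proportional to exp(-E(x)) the learned score is its negative gradient; the claimed equivalence is that a diffusion model trained on discrete patterns ends up with an energy of exactly this type:

```latex
E(x) \;=\; \frac{1}{2}\,x^{\top}x \;-\; \frac{1}{\beta}\,\log \sum_{\mu=1}^{P} \exp\bigl(\beta\,\xi^{\mu}\cdot x\bigr),
\qquad
\nabla_x \log p(x) \;=\; -\nabla_x E(x).
```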
arXiv Detail & Related papers (2023-09-29T14:48:24Z)