Memory as Structured Trajectories: Persistent Homology and Contextual Sheaves
- URL: http://arxiv.org/abs/2508.11646v1
- Date: Fri, 01 Aug 2025 23:03:13 GMT
- Title: Memory as Structured Trajectories: Persistent Homology and Contextual Sheaves
- Authors: Xin Li
- Abstract summary: We introduce the delta-homology analogy, which formalizes memory as a set of sparse, topologically irreducible attractors. A Dirac delta-like memory trace is identified with a nontrivial homology generator on a latent manifold of cognitive states. We interpret these delta-homology generators as the low-entropy content variable, while the high-entropy context variable is represented dually as a filtration, cohomology class, or sheaf.
- Score: 5.234742752529437
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose a topological framework for memory and inference grounded in the structure of spike-timing dynamics, persistent homology, and the Context-Content Uncertainty Principle (CCUP). Starting from the observation that polychronous neural groups (PNGs) encode reproducible, time-locked spike sequences shaped by axonal delays and synaptic plasticity, we construct spatiotemporal complexes whose temporally consistent transitions define chain complexes over which robust activation cycles emerge. These activation loops are abstracted into cell posets, enabling a compact and causally ordered representation of neural activity with overlapping and compositional memory traces. We introduce the delta-homology analogy, which formalizes memory as a set of sparse, topologically irreducible attractors. A Dirac delta-like memory trace is identified with a nontrivial homology generator on a latent manifold of cognitive states. Such traces are sharply localized along reproducible topological cycles and are only activated when inference trajectories complete a full cycle. They encode minimal, path-dependent memory units that cannot be synthesized from local features alone. We interpret these delta-homology generators as the low-entropy content variable, while the high-entropy context variable is represented dually as a filtration, cohomology class, or sheaf over the same latent space. Inference is recast as a dynamic alignment between content and context, and coherent memory retrieval corresponds to the existence of a global section that selects and sustains a topological generator. Memory is no longer a static attractor or distributed code, but a cycle-completing, structure-aware inference process.
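As a rough illustration of the persistent-homology machinery invoked above (a sketch of the standard Vietoris-Rips pipeline, not the authors' construction over spike-timing complexes), the snippet below computes H1 persistence for a synthetic latent point cloud sampled near a circle; the `ripser` package and the toy data are assumptions of the example, not part of the paper.

```python
# Illustrative sketch only: H1 persistence of a noisy circle, standing in
# for a latent manifold of cognitive states with one irreducible 1-cycle.
# Assumes the third-party ripser package (pip install ripser).
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

dgms = ripser(points, maxdim=1)["dgms"]
h1 = dgms[1]                      # (birth, death) pairs in dimension 1
lifetimes = h1[:, 1] - h1[:, 0]
# A single long-lived H1 bar is the analogue of a delta-like memory trace:
# sharply localized along one reproducible topological cycle.
print("most persistent 1-cycle lifetime:", lifetimes.max())
```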
Related papers
- Memory-Amortized Inference: A Topological Unification of Search, Closure, and Structure [6.0044467881527614]
We propose Memory-Amortized Inference (MAI), a formal framework that unifies learning and memory as phase transitions of a single geometric substrate. We show that cognition operates by converting high-complexity search into low-complexity lookup. This framework offers a rigorous explanation for the emergence of fast-thinking (intuition) from slow-thinking (reasoning).
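At its most schematic, "converting high-complexity search into low-complexity lookup" is memoization: pay the search cost once, then amortize it over retrievals. The sketch below is a deliberately minimal caricature of that idea (our illustration, not the MAI framework itself).

```python
# Caricature of memory-amortized inference: an expensive search runs once
# per context; later queries with the same context hit the cache instead.
from functools import lru_cache

@lru_cache(maxsize=None)
def infer(context: tuple) -> int:
    # Stand-in for a high-complexity search over all prefixes.
    return max(sum(context[:k]) for k in range(1, len(context) + 1))

infer((3, -1, 4, -1, 5))  # slow path: search ("reasoning")
infer((3, -1, 4, -1, 5))  # fast path: cached lookup ("intuition")
```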
arXiv Detail & Related papers (2025-11-28T16:28:24Z)
- Cycle is All You Need: More Is Different [6.0044467881527614]
We propose an information-topological framework in which cycle closure is the fundamental mechanism of memory and consciousness. We show that memory is not a static store but the ability to re-enter latent cycles in neural state space. We conclude that cycle is all you need: persistent invariants enable generalization in non-ergodic environments.
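One operational reading of "re-entering latent cycles" is a recurrence test on a state trajectory: a cycle closes when the trajectory returns within tolerance of a previously visited state. The sketch below implements only that naive test, as one assumption about what cycle closure could mean computationally.

```python
# Naive cycle-closure test: report the first time a trajectory returns
# within `tol` of an earlier state, at least `min_lag` steps later.
import numpy as np

def first_closure(traj, tol=1e-2, min_lag=5):
    for t in range(min_lag, len(traj)):
        dists = np.linalg.norm(traj[: t - min_lag + 1] - traj[t], axis=1)
        s = int(dists.argmin())
        if dists[s] < tol:
            return s, t  # the closed cycle runs from step s to step t
    return None

# A planar rotation revisits its starting state after one full period.
angles = np.linspace(0, 4 * np.pi, 200, endpoint=False)
traj = np.c_[np.cos(angles), np.sin(angles)]
print(first_closure(traj))  # (0, 100): closure after one full turn
```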
arXiv Detail & Related papers (2025-09-15T21:48:30Z)
- PowerGrow: Feasible Co-Growth of Structures and Dynamics for Power Grid Synthesis [75.14189839277928]
We present PowerGrow, a co-generative framework that significantly reduces computational overhead while maintaining operational validity. Experiments across benchmark settings show that PowerGrow outperforms prior diffusion models in fidelity and diversity. This demonstrates its ability to generate operationally valid and realistic power grid scenarios.
arXiv Detail & Related papers (2025-08-29T01:47:27Z)
- Beyond Turing: Memory-Amortized Inference as a Foundation for Cognitive Computation [5.234742752529437]
We introduce Memory-Amortized Inference (MAI) as a formal framework in which cognition is modeled as inference over latent cycles in memory. We show that MAI provides a principled foundation for Mountcastle's Universal Cortical Algorithm. We briefly discuss the profound implications of MAI for achieving artificial general intelligence.
arXiv Detail & Related papers (2025-08-19T15:10:26Z)
- Cycle-Consistent Helmholtz Machine: Goal-Seeded Simulation via Inverted Inference [5.234742752529437]
We introduce the Cycle-Consistent Helmholtz Machine (C$^2$HM). C$^2$HM reframes inference as a goal-seeded, asymmetric process grounded in structured internal priors. By offering a biologically inspired alternative to classical amortized inference, C$^2$HM reconceives generative modeling as intentional simulation.
arXiv Detail & Related papers (2025-07-03T17:24:27Z)
- Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
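For reference, the generic continuity-equation form of a Wasserstein gradient flow of an energy functional $F$ over measures (a standard definition, assumed here rather than quoted from the paper) is:

```latex
% \mu_t descends the energy F along the gradient of its first variation:
\partial_t \mu_t \;=\; \nabla \cdot \Big( \mu_t \, \nabla \frac{\delta F}{\delta \mu}[\mu_t] \Big)
```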
arXiv Detail & Related papers (2025-06-26T22:40:30Z)
- Latent Structured Hopfield Network for Semantic Association and Retrieval [52.634915010996835]
Episodic memory enables humans to recall past experiences by associating semantic elements such as objects, locations, and time into coherent event representations. We propose the Latent Structured Hopfield Network (LSHN), a framework that integrates continuous Hopfield attractor dynamics into an autoencoder architecture. Unlike traditional Hopfield networks, our model is trained end-to-end with gradient descent, achieving scalable and robust memory retrieval.
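A minimal numerical sketch of the continuous Hopfield attractor dynamics referred to here (the generic textbook relaxation, not the LSHN architecture): a Hebbian weight matrix makes the stored pattern a fixed point, and a noisy cue relaxes onto it.

```python
# Generic continuous Hopfield relaxation: dx/dt = -x + W tanh(g x),
# with Hebbian W and gain g > 1 so the stored pattern is a stable
# nonzero fixed point. Not the paper's LSHN, just the classical core.
import numpy as np

pattern = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
W = np.outer(pattern, pattern) / len(pattern)  # Hebbian storage

x = pattern + 0.8 * np.random.default_rng(1).normal(size=5)  # corrupted cue
for _ in range(200):
    x += 0.1 * (-x + W @ np.tanh(4.0 * x))  # Euler step toward the attractor

print(np.sign(x))  # should match the stored pattern's signs
```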
arXiv Detail & Related papers (2025-06-02T04:24:36Z)
- Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
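The role of skew-symmetry is easy to verify numerically: for $\dot{x} = Wx$ with $W^\top = -W$, the spectrum is purely imaginary and the flow preserves the norm of $x$, giving sustained oscillations rather than decay or blow-up. A minimal check (our sketch, not the paper's code):

```python
# Skew-symmetric linear dynamics dx/dt = W x conserve ||x||, yielding
# sustained oscillations (the linear skeleton of a stable limit cycle).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
W = A - A.T                               # skew-symmetric: W.T == -W
print(np.round(np.linalg.eigvals(W), 3))  # purely imaginary spectrum

x0 = rng.normal(size=4)
for t in (0.0, 1.0, 5.0, 25.0):
    x_t = expm(t * W) @ x0                # exact flow of dx/dt = W x
    print(t, np.linalg.norm(x_t))         # norm is the same at every t
```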
arXiv Detail & Related papers (2025-04-16T10:39:43Z)
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations. In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Back to the Continuous Attractor [4.866486451835401]
Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general: they are destroyed by most infinitesimal changes of the dynamical law that defines them. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. We build on the persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors.
arXiv Detail & Related papers (2024-07-31T18:37:05Z)
- On Delta-Homology Analogy: Memory as Structured Trajectories [5.234742752529437]
We introduce the delta-homology analogy, which formalizes memory as a set of sparse, topologically irreducible attractors. Based on the analogy, we propose a topological framework for memory and inference grounded in the structure of spike-timing dynamics and persistent homology.
arXiv Detail & Related papers (2023-03-07T19:47:01Z)
- Dynamical solitons and boson fractionalization in cold-atom topological insulators [110.83289076967895]
We study the $\mathbb{Z}_2$ Bose-Hubbard model at incommensurate densities. We show how defects in the $\mathbb{Z}_2$ field can appear in the ground state, connecting different sectors. Using a pumping argument, we show that this fractionalization survives also for finite interactions.
arXiv Detail & Related papers (2020-03-24T17:31:34Z)