Cycle is All You Need: More Is Different
- URL: http://arxiv.org/abs/2509.21340v1
- Date: Mon, 15 Sep 2025 21:48:30 GMT
- Title: Cycle is All You Need: More Is Different
- Authors: Xin Li
- Abstract summary: We propose an information-topological framework in which cycle closure is the fundamental mechanism of memory and consciousness. We show that memory is not a static store but the ability to re-enter latent cycles in neural state space. We conclude that cycle is all you need: persistent invariants enable generalization in non-ergodic environments.
- Score: 6.0044467881527614
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose an information-topological framework in which cycle closure is the fundamental mechanism of memory and consciousness. Memory is not a static store but the ability to re-enter latent cycles in neural state space, with invariant cycles serving as carriers of meaning by filtering order-specific noise and preserving what persists across contexts. The dot-cycle dichotomy captures this: transient dots scaffold exploration, while nontrivial cycles encode low-entropy content invariants that stabilize memory. Biologically, polychronous neural groups realize 1-cycles through delay-locked spiking reinforced by STDP, nested within theta-gamma rhythms that enforce boundary cancellation. These micro-cycles compose hierarchically, extending navigation loops into general memory and cognition. The perception-action cycle introduces high-order invariance: closure holds even across sense-act alternations, generalizing ancestral homing behavior. Sheaf-cosheaf duality formalizes this process: sheaves glue perceptual fragments into global sections, cosheaves decompose global plans into actions, and closure aligns top-down predictions with bottom-up cycles. Consciousness then arises as the persistence of high-order invariants that integrate (unity) yet differentiate (richness) across contexts. We conclude that cycle is all you need: persistent invariants enable generalization in non-ergodic environments with long-term coherence at minimal energetic cost.
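The abstract's core claim, that memory content lives in nontrivial 1-cycles (closed loops in state space that are not boundaries of anything and therefore survive as homology classes), can be illustrated with off-the-shelf persistent homology. The sketch below is not the paper's method; it assumes the `ripser` Python package and a synthetic noisy loop standing in for a re-entrant trajectory in neural state space, and simply checks that one long-lived H1 feature (the latent cycle) dominates the short-lived, noise-scale structure.

```python
# A minimal illustrative sketch (not from the paper). Assumes the `ripser`
# package (pip install ripser) and uses a synthetic noisy loop as a stand-in
# for a re-entrant trajectory in neural state space.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)

# "Trajectory": samples around a latent cycle, perturbed by order-specific
# noise (the transient "dots" of the dot-cycle dichotomy).
theta = rng.uniform(0.0, 2.0 * np.pi, size=300)
loop = np.column_stack([np.cos(theta), np.sin(theta)])
trajectory = loop + 0.1 * rng.normal(size=loop.shape)

# Persistent homology up to dimension 1: H0 tracks connected components
# ("dots"), H1 tracks independent cycles.
h1 = ripser(trajectory, maxdim=1)["dgms"][1]

# Lifetime (death - birth) of each H1 feature. The latent cycle should appear
# as a single bar whose lifetime dominates the noise-scale bars.
lifetimes = np.sort(h1[:, 1] - h1[:, 0])[::-1]
print("Longest H1 lifetimes:", np.round(lifetimes[:3], 3))
```

Because the persistence diagram depends only on pairwise distances between the sampled states, reordering the visits leaves the dominant bar unchanged, a toy version of the order-invariance the abstract attributes to cycles as carriers of meaning.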
Related papers
- Towards Multimodal Lifelong Understanding: A Dataset and Agentic Baseline [58.585692088008905]
MM-Lifelong is a dataset designed for Multimodal Lifelong Understanding. Comprising 181.1 hours of footage, it is structured across Day, Week, and Month scales to capture varying temporal densities.
arXiv Detail & Related papers (2026-03-05T18:52:12Z) - Random-Matrix-Induced Simplicity Bias in Over-parameterized Variational Quantum Circuits [72.0643009153473]
We show that expressive variational ansatze enter a Haar-like universality class in which both observable expectation values and parameter gradients concentrate exponentially with system size. As a consequence, the hypothesis class induced by such circuits collapses with high probability to a narrow family of near-constant functions. We further show that this collapse is not unavoidable: tensor-structured VQCs, including tensor-network-based and tensor-hypernetwork parameterizations, lie outside the Haar-like universality class.
arXiv Detail & Related papers (2026-01-05T08:04:33Z) - Memory as Structured Trajectories: Persistent Homology and Contextual Sheaves [5.234742752529437]
We introduce the delta-homology analogy, which formalizes memory as a set of sparse, topologically irreducible attractors. A Dirac delta-like memory trace is identified with a nontrivial homology generator on a latent manifold of cognitive states. We interpret these delta-homology generators as the low-entropy content variable, while the high-entropy context variable is represented dually as a filtration, cohomology class, or sheaf.
arXiv Detail & Related papers (2025-08-01T23:03:13Z) - Information Must Flow: Recursive Bootstrapping for Information Bottleneck in Optimal Transport [5.234742752529437]
We present a unified framework that models cognition as the directed flow of information between high-entropy context and low-entropy content. Inference emerges as a cycle of bidirectional interactions, bottom-up contextual disambiguation paired with top-down content reconstruction. Building on this, we propose that language emerges as a symbolic transport system, externalizing latent content to synchronize inference cycles across individuals.
arXiv Detail & Related papers (2025-07-08T13:56:50Z) - Flow-Based Non-stationary Temporal Regime Causal Structure Learning [49.77103348208835]
We introduce FANTOM, a unified framework for causal discovery. It handles non-stationary processes along with non-Gaussian and heteroscedastic noise. It simultaneously infers the number of regimes and their corresponding indices and learns each regime's Directed Acyclic Graph.
arXiv Detail & Related papers (2025-06-20T15:12:43Z) - Neuron: Learning Context-Aware Evolving Representations for Zero-Shot Skeleton Action Recognition [64.56321246196859]
We propose a novel dyNamically Evolving dUal skeleton-semantic syneRgistic framework. We first construct the spatial-temporal evolving micro-prototypes and integrate dynamic context-aware side information. We introduce the spatial compression and temporal memory mechanisms to guide the growth of spatial-temporal micro-prototypes.
arXiv Detail & Related papers (2024-11-18T05:16:11Z) - Back to the Continuous Attractor [4.866486451835401]
Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general: they are destroyed by most infinitesimal changes of the dynamical law that defines them. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. We build on persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors.
arXiv Detail & Related papers (2024-07-31T18:37:05Z) - Analysis of the Memorization and Generalization Capabilities of AI Agents: Are Continual Learners Robust? [91.682459306359]
In continual learning (CL), an AI agent learns from non-stationary data streams in dynamic environments.
In this paper, a novel CL framework is proposed to achieve robust generalization to dynamic environments while retaining past knowledge.
The generalization and memorization performance of the proposed framework are theoretically analyzed.
arXiv Detail & Related papers (2023-09-18T21:00:01Z) - On Delta-Homology Analogy: Memory as Structured Trajectories [5.234742752529437]
We introduce the delta-homology analogy, which formalizes memory as a set of sparse, topologically irreducible attractors. Based on the analogy, we propose a topological framework for memory and inference grounded in the structure of spike-timing dynamics and persistent homology.
arXiv Detail & Related papers (2023-03-07T19:47:01Z) - Information Topology [6.0044467881527614]
We introduce Information Topology, a framework that unifies information theory and algebraic topology. The starting point is the dot-cycle dichotomy, which separates pointwise, order-sensitive fluctuations (dots) from order-invariant, predictive structure (cycles). We then define homological capacity, the topological dual of Shannon capacity, as the number of independent informational cycles supported by a system.
arXiv Detail & Related papers (2022-10-07T23:54:30Z) - Fingerprint and universal Markovian closure of structured bosonic environments [53.869623568923515]
We exploit the properties of chain mapping transformations of bosonic environments to identify a finite collection of modes able to capture the characteristic features, or fingerprint, of the environment.
We show that the Markovian closure provides a quadratic speed-up with respect to standard chain mapping techniques.
arXiv Detail & Related papers (2022-08-03T11:08:50Z) - Discrete Variational Attention Models for Language Generation [51.88612022940496]
We propose a discrete variational attention model with a categorical distribution over the attention mechanism, owing to the discrete nature of language.
Thanks to the property of discreteness, the training of our proposed approach does not suffer from posterior collapse.
arXiv Detail & Related papers (2020-04-21T05:49:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.