What is Intelligence? A Cycle Closure Perspective
- URL: http://arxiv.org/abs/2404.05484v3
- Date: Sat, 04 Oct 2025 10:47:48 GMT
- Title: What is Intelligence? A Cycle Closure Perspective
- Authors: Xin Li,
- Abstract summary: We argue for a structural-dynamical account rooted in a topological closure law. We show that \textbf{Memory-Amortized Inference (MAI)} is the computational mechanism that implements SbS\,$\rightarrow$\,CCUP through dual bootstrapping.
- Score: 6.0044467881527614
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: What is intelligence? We argue for a structural-dynamical account rooted in a topological closure law: \emph{the boundary of a boundary vanishes} ($\partial^2=0$). This principle forces transient fragments to cancel while closed cycles persist as invariants, yielding the cascade $\partial^2\!=\!0 \Rightarrow \text{cycles (invariants)} \Rightarrow \text{memory} \Rightarrow \text{prediction (intelligence)}$. Prediction requires invariance: only order-invariant cycles can stabilize the predictive substrate. This motivates the \textbf{Structure-before-Specificity (SbS)} principle, where persistent structures ($\Phi$) must stabilize before contextual specificities ($\Psi$) can be meaningfully interpreted, and is formalized by the \textbf{Context-Content Uncertainty Principle (CCUP)}, which casts cognition as dynamic alignment that minimizes the joint uncertainty $H(\Phi,\Psi)$. We show that \textbf{Memory-Amortized Inference (MAI)} is the computational mechanism that implements SbS\,$\rightarrow$\,CCUP through dual bootstrapping: \emph{temporal} bootstrapping consolidates episodic specifics into reusable latent trajectories, while \emph{spatial} bootstrapping reuses these invariants across latent manifolds. This framework explains why \emph{semantics precedes syntax}: stable cycles anchor meaning, and symbolic syntax emerges only after semantic invariants are in place. From an evolutionary perspective, the same closure law unifies the trajectory of natural intelligence: from primitive memory traces in microbes, to cyclic sensorimotor patterns in bilaterians, to semantic generalization in mammals, culminating in human symbolic abstraction through natural language. In sum, intelligence arises from the progressive collapse of specificity into structure, grounded in the closure-induced emergence of invariants.
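As a concrete illustration (a minimal sketch, not from the paper), the closure law $\partial^2=0$ can be checked numerically on a filled triangle: composing the two boundary maps over GF(2) gives the zero map, and the triangle's edge loop is exactly the kind of 1-cycle that persists as an invariant.

```python
# Minimal sketch: "the boundary of a boundary vanishes" on a filled triangle.
import numpy as np

# Vertices {0, 1, 2}; edges e01, e02, e12; one face f012.
# Boundary matrices over GF(2): entry [i, j] = 1 iff cell i bounds cell j.
d1 = np.array([[1, 1, 0],   # vertex 0 lies on e01, e02
               [1, 0, 1],   # vertex 1 lies on e01, e12
               [0, 1, 1]])  # vertex 2 lies on e02, e12
d2 = np.array([[1],         # e01 bounds f012
               [1],         # e02 bounds f012
               [1]])        # e12 bounds f012

# Closure law: the composition of two boundary maps is the zero map.
assert np.all((d1 @ d2) % 2 == 0)            # partial^2 = 0

# The edge loop e01 + e02 + e12 lies in ker(d1): it is a 1-cycle.
cycle = np.array([1, 1, 1])
assert np.all((d1 @ cycle) % 2 == 0)
# With the face removed, this loop is no longer a boundary and survives
# as a generator of H_1, i.e., a persistent invariant.
print("partial^2 = 0 verified; the edge loop is a 1-cycle")
```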
Related papers
- Structural Analysis of Directional qLDPC Codes [5.685589351789461]
Directional codes, recently introduced by Gehér, Byfield, and Ruban, constitute a hardware-motivated family of quantum low-density parity-check (qLDPC) codes. These codes are defined by stabilizers measured by ancilla qubits executing a fixed \emph{direction word} (route) on square- or hex-grid connectivity.
arXiv Detail & Related papers (2026-02-22T05:59:57Z) - Identifying Intervenable and Interpretable Features via Orthogonality Regularization [48.938969291033665]
We disentangle the decoder matrix into almost orthogonal features. This reduces interference and superposition between the features, while keeping performance on the target dataset essentially unchanged. Our code is available at https://github.com/mrtzmllr/sae-icm.
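For concreteness, below is a generic sketch of one standard orthogonality regularizer on a decoder matrix; the function name and exact penalty are assumptions for illustration, not necessarily the paper's loss.

```python
# Hypothetical sketch of orthogonality regularization on decoder features.
import torch

def orthogonality_penalty(decoder_weight: torch.Tensor) -> torch.Tensor:
    """decoder_weight: (n_features, dim) matrix of feature directions."""
    W = torch.nn.functional.normalize(decoder_weight, dim=1)  # unit-norm rows
    gram = W @ W.T                                            # pairwise cosines
    off_diag = gram - torch.eye(W.shape[0])
    return (off_diag ** 2).sum()          # zero iff features are orthogonal

W = torch.randn(16, 64, requires_grad=True)
loss = orthogonality_penalty(W)           # add to the task loss, small weight
loss.backward()
```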
arXiv Detail & Related papers (2026-02-04T16:29:14Z) - Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space [56.37266873329401]
Large Language Models (LLMs) apply uniform computation to all tokens, despite language exhibiting highly non-uniform information density. We propose \textbf{Dynamic Large Concept Models (DLCM)}, a hierarchical language modeling framework that learns semantic boundaries from latent representations and shifts from tokens to a compressed concept space where reasoning is more efficient.
arXiv Detail & Related papers (2025-12-31T04:19:33Z) - How to Tame Your LLM: Semantic Collapse in Continuous Systems [0.0]
We develop a theory of semantic dynamics for large language models by formalizing them as Continuous State Machines (CSMs). We prove the Semantic Characterization Theorem (SCT). We extend the SCT to drifting kernels and adiabatic settings, showing that slowly drifting kernels preserve compactness, spectral coherence, and basin structure.
arXiv Detail & Related papers (2025-12-04T11:33:02Z) - Memory-Amortized Inference: A Topological Unification of Search, Closure, and Structure [6.0044467881527614]
We propose \textbf{Memory-Amortized Inference (MAI)}, a formal framework that unifies learning and memory as phase transitions of a single geometric substrate. We show that cognition operates by converting high-complexity search into low-complexity lookup. This framework offers a rigorous explanation for the emergence of fast thinking (intuition) from slow thinking (reasoning).
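The search-to-lookup conversion can be illustrated with plain memoization (a toy analogy only, not the MAI mechanism itself): an expensive search runs once, and repeated queries reuse the cached result.

```python
# Toy analogy: amortize a high-complexity search into an O(1) lookup.
from functools import lru_cache

@lru_cache(maxsize=None)
def solve(state: int) -> int:
    # Stand-in for an expensive search/inference procedure.
    return min(range(1, 10), key=lambda a: abs(state - a * a))

solve(50)   # slow path: the search actually runs
solve(50)   # fast path: amortized lookup of the stored result
```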
arXiv Detail & Related papers (2025-11-28T16:28:24Z) - Cycle is All You Need: More Is Different [6.0044467881527614]
We propose an information-topological framework in which cycle closure is the fundamental mechanism of memory and consciousness. We show that memory is not a static store but the ability to re-enter latent cycles in neural state space. We conclude that cycle is all you need: persistent invariants enable generalization in non-ergodic environments.
arXiv Detail & Related papers (2025-09-15T21:48:30Z) - Scientific Machine Learning of Chaotic Systems Discovers Governing Equations for Neural Populations [0.05804487044220691]
We introduce the PEM-UDE method to extract interpretable mathematical expressions from chaotic dynamical systems. When applied to neural populations, our method derives novel governing equations that respect biological constraints. These equations predict an emergent relationship between connection density and both oscillation frequency and synchrony in neural circuits.
arXiv Detail & Related papers (2025-07-04T14:57:58Z) - From Memories to Maps: Mechanisms of In-Context Reinforcement Learning in Transformers [2.4554686192257424]
We train a transformer to perform in-context reinforcement learning on a distribution of planning tasks inspired by rodent behavior. We characterize the learning algorithms that emerge in the model. We find that memory may serve as a computational resource, storing both raw experience and cached computations to support flexible behavior.
arXiv Detail & Related papers (2025-06-24T14:55:43Z) - Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z) - Towards a Sharp Analysis of Offline Policy Learning for $f$-Divergence-Regularized Contextual Bandits [49.96531901205305]
We analyze $f$-divergence-regularized offline policy learning. For reverse Kullback-Leibler (KL) divergence, we give the first $\tilde{O}(\epsilon^{-1})$ sample complexity under single-policy concentrability. We extend our analysis to dueling bandits, and we believe these results take a significant step toward a comprehensive understanding of $f$-divergence-regularized policy learning.
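For concreteness, here is a worked numeric sketch under assumed notation (the paper's exact setup may differ): for the reverse-KL-regularized objective $J(\pi) = \mathbb{E}_\pi[r] - \beta\,\mathrm{KL}(\pi \,\|\, \pi_{\mathrm{ref}})$, the optimum is the reference policy tilted by exponentiated rewards.

```python
# Numeric sketch of a reverse-KL-regularized bandit objective (one context).
import numpy as np

r = np.array([1.0, 0.5, 0.0])             # rewards of three actions
pi_ref = np.full(3, 1 / 3)                # reference policy
beta = 0.1                                # regularization strength

# Regularized optimum: pi(a) proportional to pi_ref(a) * exp(r(a) / beta).
logits = np.log(pi_ref) + r / beta
pi = np.exp(logits - logits.max())
pi /= pi.sum()

kl = np.sum(pi * np.log(pi / pi_ref))     # reverse KL(pi || pi_ref)
J = pi @ r - beta * kl
print(pi, J)
```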
arXiv Detail & Related papers (2025-02-09T22:14:45Z) - Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z) - An Overlooked Role of Context-Sensitive Dendrites [2.225268436173329]
We show that context-sensitive two-point neurons (CS-TPNs) flexibly integrate contextual input (C) moment-by-moment with the feedforward (FF) somatic current at the soma.
This enables the propagation of more coherent signals (bursts), making learning faster with fewer neurons.
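A hypothetical two-compartment sketch of this integration (the functional form below is an assumption for illustration, not the paper's model): context multiplicatively amplifies coherent feedforward drive and damps conflicting drive, rather than adding to it.

```python
# Assumed illustrative form of context-sensitive two-point integration.
import numpy as np

def cs_tpn(ff: float, c: float) -> float:
    # Context C modulates, rather than adds to, the feedforward drive FF.
    return ff * (1.0 + 0.5 * np.tanh(ff * c))

print(cs_tpn(0.8, 2.0))    # agreement: output exceeds FF (coherent burst)
print(cs_tpn(0.8, -2.0))   # conflict: output is suppressed below FF
```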
arXiv Detail & Related papers (2024-08-20T17:18:54Z) - CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations [3.3713037259290255]
Current analysis methods often fail to harness the richness of such data.
CREIMBO identifies the hidden composition of per-session neural ensembles through graph-driven dictionary learning.
We demonstrate CREIMBO's ability to recover true components in synthetic data.
arXiv Detail & Related papers (2024-05-27T17:48:32Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - Inferring Inference [7.11780383076327]
We develop a framework for inferring canonical distributed computations from large-scale neural activity patterns.
We simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model.
Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
arXiv Detail & Related papers (2023-10-04T22:12:11Z) - Grounded Object Centric Learning [46.091323528165205]
We present \emph{Conditional Slot Attention} (CoSA), using a novel concept of a \emph{Grounded Slot Dictionary} (GSD) inspired by vector quantization.
We demonstrate the benefits of our method in multiple downstream tasks such as scene generation, composition, and task adaptation.
arXiv Detail & Related papers (2023-07-18T17:11:55Z) - Information Topology [6.0044467881527614]
We introduce \emph{Information Topology}, a framework that unifies information theory and algebraic topology. The starting point is the \emph{dot-cycle dichotomy}, which separates pointwise, order-sensitive fluctuations (dots) from order-invariant, predictive structure (cycles). We then define \emph{homological capacity}, the topological dual of Shannon capacity, as the number of independent informational cycles supported by a system.
arXiv Detail & Related papers (2022-10-07T23:54:30Z) - Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z) - Towards Antisymmetric Neural Ansatz Separation [48.80300074254758]
We study separations between two fundamental models of antisymmetric functions, that is, functions $f$ of the form $f(x_{\sigma(1)}, \ldots, x_{\sigma(N)}) = \mathrm{sign}(\sigma)\, f(x_1, \ldots, x_N)$ for any permutation $\sigma$.
These arise in the context of quantum chemistry, and are the basic modeling tool for wavefunctions of Fermionic systems.
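A minimal sketch of the antisymmetry property itself (illustrative, not the paper's ansatz): a Slater-style determinant flips sign under exchange of any two inputs.

```python
# Antisymmetry check for a Slater-style determinant ansatz.
import numpy as np

def slater(xs: np.ndarray) -> float:
    # Determinant of fixed basis functions phi_k(x) = x**k (Vandermonde).
    return float(np.linalg.det(np.vander(xs, len(xs), increasing=True)))

xs = np.array([0.3, 1.2, -0.7])
swapped = xs[[1, 0, 2]]                          # exchange the first two inputs
assert np.isclose(slater(swapped), -slater(xs))  # sign flip = antisymmetry
```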
arXiv Detail & Related papers (2022-08-05T16:35:24Z) - Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
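An illustrative signal-level sketch of phase-amplitude CFC (parameters arbitrary): a 6 Hz theta rhythm modulates the envelope of a 40 Hz gamma carrier.

```python
# Synthetic theta-gamma phase-amplitude coupling.
import numpy as np

fs = 1000.0                                       # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
theta = np.sin(2 * np.pi * 6.0 * t)               # 6 Hz theta phase signal
envelope = 0.5 * (1.0 + theta)                    # gamma amplitude locked to theta
gamma = envelope * np.sin(2 * np.pi * 40.0 * t)   # 40 Hz gamma carrier
lfp = theta + gamma                               # composite field potential

print(np.corrcoef(envelope, theta)[0, 1])         # ~1.0: amplitude tracks phase
```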
arXiv Detail & Related papers (2022-04-05T17:13:36Z) - Unsupervised Semantic Segmentation by Distilling Feature Correspondences [94.73675308961944]
Unsupervised semantic segmentation aims to discover and localize semantically meaningful categories within image corpora without any form of annotation.
We present STEGO, a novel framework that distills unsupervised features into high-quality discrete semantic labels.
STEGO yields a significant improvement over the prior state of the art, on both the CocoStuff and Cityscapes challenges.
arXiv Detail & Related papers (2022-03-16T06:08:47Z) - A probabilistic latent variable model for detecting structure in binary data [0.6767885381740952]
We introduce a novel, probabilistic binary latent variable model to detect noisy or approximate repeats of patterns in sparse binary data.
The model's capability is demonstrated by extracting structure in recordings from retinal neurons.
We apply our model to spiking responses recorded in retinal ganglion cells during stimulation with a movie.
arXiv Detail & Related papers (2022-01-26T18:37:35Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - From internal models toward metacognitive AI [0.0]
In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" orchestrates conscious involvement of generative-inverse model pairs.
A high responsibility signal is given to the pairs that best capture the external world.
Consciousness is determined by the entropy of responsibility signals across all pairs.
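A toy formalization under assumed notation (not the paper's equations): responsibilities as a softmax over model-pair prediction errors, with their entropy as the proposed index.

```python
# Responsibility signals across generative-inverse model pairs (sketch).
import numpy as np

prediction_errors = np.array([0.2, 1.5, 2.0])   # one per model pair
resp = np.exp(-prediction_errors)
resp /= resp.sum()                              # high responsibility = low error

entropy = -np.sum(resp * np.log(resp))          # low entropy: one pair dominates
print(resp, entropy)
```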
arXiv Detail & Related papers (2021-09-27T05:00:56Z) - The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.