Information Topology
- URL: http://arxiv.org/abs/2210.03850v3
- Date: Mon, 13 Oct 2025 13:44:27 GMT
- Title: Information Topology
- Authors: Xin Li
- Abstract summary: We introduce Information Topology, a framework that unifies information theory and algebraic topology. The starting point is the dot-cycle dichotomy, which separates pointwise, order-sensitive fluctuations (dots) from order-invariant, predictive structure (cycles). We then define homological capacity, the topological dual of Shannon capacity, as the number of independent informational cycles supported by a system.
- Score: 6.0044467881527614
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We introduce \emph{Information Topology}: a framework that unifies information theory and algebraic topology by treating \emph{cycle closure} as the primitive operation of inference. The starting point is the \emph{dot-cycle dichotomy}, which separates pointwise, order-sensitive fluctuations (dots) from order-invariant, predictive structure (cycles). Algebraically, closure is the cancellation of boundaries ($\partial^2=0$), which converts transient histories into stable invariants. Building on this, we derive the \emph{Structure-Before-Specificity} (SbS) principle: stable information resides in nontrivial homology classes that persist under perturbations, while high-entropy contextual details act as scaffolds. The \emph{Context-Content Uncertainty Principle} (CCUP) quantifies this balance by decomposing uncertainty into contextual spread and content precision, showing why prediction requires invariance for generalization. Measure concentration onto residual invariant manifolds explains \emph{order invariance}: when mass collapses to a narrow tube around a closed cycle, reparameterizations of micro-steps leave predictive functionals unchanged. We then define \emph{homological capacity}, the topological dual of Shannon capacity, as the sustainable number of independent informational cycles supported by a system. This capacity links dynamical (KS) entropy to structural (homological) capacity and refines Euler characteristics from a ``net'' summary to a ``gross'' count of persistent invariants. Finally, we illustrate the theory across three domains where \emph{more is different}: \textbf{visual binding}, \textbf{working memory}, and \textbf{access consciousness}. Together, these results recast inference, learning, and communication as \emph{topological stabilization}: the formation, closure, and persistence of informational cycles that make prediction robust and scalable.
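To make the abstract's two central quantities concrete, here is a minimal numerical sketch (ours, not the paper's; the triangle complex, the rank-based Betti computation, and all names are illustrative assumptions). It checks the closure identity $\partial^2 = 0$ and counts independent 1-cycles, the simplest instance of what the abstract calls homological capacity: a hollow triangle carries one persistent cycle, and filling its face cancels that cycle into a boundary.

```python
import numpy as np

# Boundary matrix d1 (vertices x edges) for a triangle with oriented
# edges e0 = v0->v1, e1 = v1->v2, e2 = v0->v2.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])

# Boundary of the single 2-cell (face) f, with d2(f) = e0 + e1 - e2.
d2 = np.array([[1], [1], [-1]])

# Closure: the boundary of a boundary cancels (d1 @ d2 == 0).
assert not np.any(d1 @ d2)

def betti_1(d1, d2):
    """b_1 = dim ker(d1) - rank(d2), computed via matrix rank over the rationals."""
    n_edges = d1.shape[1]
    return (n_edges - np.linalg.matrix_rank(d1)) - np.linalg.matrix_rank(d2)

print(betti_1(d1, np.zeros((3, 1))))  # hollow triangle: 1 independent cycle
print(betti_1(d1, d2))                # filled triangle: the cycle is a boundary, 0
```

On the abstract's reading, the surviving generator in the hollow case is the order-invariant structure attributed to cycles, while the choice of which edge to traverse first is a dot-level detail that the homology class quotients away.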
Related papers
- Riemannian Flow Matching for Disentangled Graph Domain Adaptation [51.98961391065951]
Graph Domain Adaptation (GDA) typically uses adversarial learning to align graph embeddings in Euclidean space. DisRFM is a geometry-aware GDA framework that unifies embedding and flow-based transport.
arXiv Detail & Related papers (2026-01-31T11:05:35Z)
- Random-Matrix-Induced Simplicity Bias in Over-parameterized Variational Quantum Circuits [72.0643009153473]
We show that expressive variational ansätze enter a Haar-like universality class in which both observable expectation values and parameter gradients concentrate exponentially with system size. As a consequence, the hypothesis class induced by such circuits collapses with high probability to a narrow family of near-constant functions. We further show that this collapse is not unavoidable: tensor-structured VQCs, including tensor-network-based and tensor-hypernetwork parameterizations, lie outside the Haar-like universality class.
arXiv Detail & Related papers (2026-01-05T08:04:33Z)
- Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction [54.95522167029998]
This article is a self-contained primer on diffusion over general state spaces. We develop the discrete-time view (forward noising via Markov kernels and learned reverse dynamics) alongside its continuous-time limits. A common variational treatment yields the ELBO that underpins standard training losses.
arXiv Detail & Related papers (2025-12-04T18:55:36Z)
- Memory-Amortized Inference: A Topological Unification of Search, Closure, and Structure [6.0044467881527614]
We propose Memory-Amortized Inference (MAI), a formal framework that unifies learning and memory as phase transitions of a single geometric substrate. We show that cognition operates by converting high-complexity search into low-complexity lookup. This framework offers a rigorous explanation for the emergence of fast-thinking (intuition) from slow-thinking (reasoning).
arXiv Detail & Related papers (2025-11-28T16:28:24Z)
- Cycle is All You Need: More Is Different [6.0044467881527614]
We propose an information-topological framework in which cycle closure is the fundamental mechanism of memory and consciousness. We show that memory is not a static store but the ability to re-enter latent cycles in neural state space. We conclude that cycle is all you need: persistent invariants enable generalization in non-ergodic environments.
arXiv Detail & Related papers (2025-09-15T21:48:30Z)
- Time Symmetry, Retrocausality, and Emergent Collapse: The Tlalpan Interpretation of Quantum Mechanics [51.56484100374058]
The Tlalpan Interpretation (QTI) proposes that wavefunction collapse is not a primitive, axiomatic rule but an emergent phenomenon. The novelty of QTI lies in its embedding of collapse within the conceptual language of critical phenomena in statistical physics.
arXiv Detail & Related papers (2025-08-25T20:30:56Z)
- Memory as Structured Trajectories: Persistent Homology and Contextual Sheaves [5.234742752529437]
We introduce the delta-homology analogy, which formalizes memory as a set of sparse, topologically irreducible attractors. A Dirac delta-like memory trace is identified with a nontrivial homology generator on a latent manifold of cognitive states. We interpret these delta-homology generators as the low-entropy content variable, while the high-entropy context variable is represented dually as a filtration, cohomology class, or sheaf.
arXiv Detail & Related papers (2025-08-01T23:03:13Z)
- Sequential-Parallel Duality in Prefix Scannable Models [68.39855814099997]
Recent developments have given rise to various models, such as Gated Linear Attention (GLA) and Mamba. This raises a natural question: can we characterize the full class of neural sequence models that support near-constant-time parallel evaluation and linear-time, constant-space sequential inference?
arXiv Detail & Related papers (2025-06-12T17:32:02Z)
- Spectral Architecture Search for Neural Network Models [0.0]
We present a novel architecture search protocol which exploits the spectral attributes of the inter-layer transfer matrices. We show that the newly proposed method yields a self-emerging architecture with a minimal degree of expressivity to handle the task under investigation.
arXiv Detail & Related papers (2025-04-01T15:14:30Z)
- Topological Deep Learning with State-Space Models: A Mamba Approach for Simplicial Complexes [4.787059527893628]
We propose a novel architecture designed to operate with simplicial complexes, utilizing the Mamba state-space model as its backbone.
Our approach generates sequences for the nodes based on the neighboring cells, enabling direct communication between all higher-order structures, regardless of their rank.
arXiv Detail & Related papers (2024-09-18T14:49:25Z)
- Learnable & Interpretable Model Combination in Dynamic Systems Modeling [0.0]
We discuss which types of models are usually combined and propose a model interface capable of expressing a variety of mixed equation-based models.
We propose a new wildcard topology that can describe the generic connection between two combined models in an easy-to-interpret fashion.
The contributions of this paper are highlighted in a proof of concept: different connection topologies between two models are learned, interpreted, and compared.
arXiv Detail & Related papers (2024-06-12T11:17:11Z)
- A Fixed-Point Approach for Causal Generative Modeling [20.88890689294816]
We propose a novel formalism for describing Structural Causal Models (SCMs) as fixed-point problems on causally ordered variables.
We establish the weakest known conditions for their unique recovery given the topological ordering (TO).
arXiv Detail & Related papers (2024-04-10T12:29:05Z)
- What is Intelligence? A Cycle Closure Perspective [6.0044467881527614]
We argue for a structural-dynamical account rooted in a topological closure law. We show that Memory-Amortized Inference (MAI) is the computational mechanism that implements SbS $\rightarrow$ CCUP through dual bootstrapping.
arXiv Detail & Related papers (2024-04-08T13:06:23Z)
- Causal Layering via Conditional Entropy [85.01590667411956]
Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates.
We provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle.
arXiv Detail & Related papers (2024-01-19T05:18:28Z)
- What is Memory? A Homological Perspective [6.0044467881527614]
We introduce the delta-homology model of memory, in which recall, learning, and prediction emerge from cycle closure. A Dirac-like memory trace corresponds to a nontrivial homology generator, representing a sparse, irreducible attractor. We formalize this mechanism through the Context-Content Uncertainty Principle (CCUP), which states that cognition minimizes joint uncertainty between a high-entropy context variable and a low-entropy content variable (see the entropy decomposition sketched after this list).
arXiv Detail & Related papers (2023-03-07T19:47:01Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Low-Rank Constraints for Fast Inference in Structured Models [110.38427965904266]
This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models.
Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces.
arXiv Detail & Related papers (2022-01-08T00:47:50Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high-dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Redefining Neural Architecture Search of Heterogeneous Multi-Network Models by Characterizing Variation Operators and Model Components [71.03032589756434]
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize both the variation operators, according to their effect on the complexity and performance of the model, and the models themselves, relying on diverse metrics that estimate the quality of their different parts.
arXiv Detail & Related papers (2021-06-16T17:12:26Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Structured Reordering for Modeling Latent Alignments in Sequence Transduction [86.94309120789396]
We present an efficient dynamic programming algorithm performing exact marginal inference of separable permutations.
The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks.
arXiv Detail & Related papers (2021-06-06T21:53:54Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Generalising Recursive Neural Models by Tensor Decomposition [12.069862650316262]
We introduce a general approach to model aggregation of structural context leveraging a tensor-based formulation.
We show how the exponential growth in the size of the parameter space can be controlled through an approximation based on the Tucker decomposition.
By this means, we can effectively regulate the trade-off between the expressivity of the encoding (controlled by the hidden size), computational complexity, and model generalisation.
arXiv Detail & Related papers (2020-06-17T17:28:19Z)
- Collegial Ensembles [11.64359837358763]
We show that collegial ensembles can be efficiently implemented in practical architectures using group convolutions and block diagonal layers.
We also show how our framework can be used to analytically derive optimal group convolution modules without having to train a single model.
arXiv Detail & Related papers (2020-06-13T16:40:26Z)
- Segmentation and Recovery of Superquadric Models using Convolutional Neural Networks [2.454342521577328]
We present a two-stage approach built around convolutional neural networks (CNNs).
In the first stage, our approach uses a Mask RCNN model to identify superquadric-like structures in depth scenes.
We are able to describe complex structures with a small number of interpretable parameters.
arXiv Detail & Related papers (2020-01-28T18:17:48Z)
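For readers tracing the Context-Content Uncertainty Principle through the main abstract and the "What is Memory?" entry above, one hedged formal reading (our gloss; the symbols $\Psi$ for context and $\Phi$ for content are assumptions, not notation confirmed by the papers) is the chain rule of Shannon entropy,

$$H(\Psi, \Phi) = H(\Psi) + H(\Phi \mid \Psi),$$

which splits joint uncertainty exactly into contextual spread $H(\Psi)$ and content uncertainty given context $H(\Phi \mid \Psi)$. On this reading, minimizing joint uncertainty while the context term stays high-entropy drives the conditional content term toward zero, matching the high-entropy-context, low-entropy-content division those abstracts describe.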