Information Must Flow: Recursive Bootstrapping for Information Bottleneck in Optimal Transport
- URL: http://arxiv.org/abs/2507.10443v1
- Date: Tue, 08 Jul 2025 13:56:50 GMT
- Title: Information Must Flow: Recursive Bootstrapping for Information Bottleneck in Optimal Transport
- Authors: Xin Li
- Abstract summary: We present a unified framework that models cognition as the directed flow of information between high-entropy context and low-entropy content. Inference emerges as a cycle of bidirectional interactions, bottom-up contextual disambiguation paired with top-down content reconstruction. Building on this, we propose that language emerges as a symbolic transport system, externalizing latent content to synchronize inference cycles across individuals.
- Score: 5.234742752529437
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We present the Context-Content Uncertainty Principle (CCUP), a unified framework that models cognition as the directed flow of information between high-entropy context and low-entropy content. Inference emerges as a cycle of bidirectional interactions, bottom-up contextual disambiguation paired with top-down content reconstruction, which resolves the Information Bottleneck in Optimal Transport (iBOT). Implemented via Rao-Blackwellized variational entropy minimization, CCUP steers representations toward minimal joint uncertainty while preserving inferential directionality. Local cycle completion underpins temporal bootstrapping, chaining simulations to refine memory, and spatial bootstrapping, enabling compositional hierarchical inference. We prove a Delta Convergence Theorem showing that recursive entropy minimization yields delta-like attractors in latent space, stabilizing perceptual schemas and motor plans. Temporal bootstrapping through perception-action loops and sleep-wake consolidation further transforms episodic traces into semantic knowledge. Extending CCUP, each hierarchical level performs delta-seeded inference: low-entropy content seeds diffuse outward along goal-constrained paths shaped by top-down priors and external context, confining inference to task-relevant manifolds and circumventing the curse of dimensionality. Building on this, we propose that language emerges as a symbolic transport system, externalizing latent content to synchronize inference cycles across individuals. Together, these results establish iBOT as a foundational principle of information flow in both individual cognition and collective intelligence, positioning recursive inference as the structured conduit through which minds adapt, align, and extend.
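The abstract packs in several mechanisms, so a toy illustration may help. The sketch below is not the paper's Rao-Blackwellized implementation; it is a minimal stand-in for the inference cycle the abstract describes, using a hypothetical confusion-noise likelihood as the context-content coupling. Each cycle pairs a bottom-up Bayesian update (contextual disambiguation) with a top-down predictive reconstruction, and the content posterior sharpens toward a delta-like attractor, echoing the Delta Convergence Theorem.

```python
# Toy sketch (not the paper's implementation) of the CCUP inference cycle:
# repeated bottom-up disambiguation of a noisy context, paired with top-down
# reconstruction, drives the content belief toward a delta-like attractor.
import numpy as np

rng = np.random.default_rng(0)
K = 8                                    # latent content states (illustrative)
true_z = 3                               # content the agent must infer
# Hypothetical observation model: context = content plus confusion noise.
likelihood = np.full((K, K), 0.05 / (K - 1))   # P(x | z); rows index context x
np.fill_diagonal(likelihood, 0.95)

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

posterior = np.full(K, 1.0 / K)          # high-entropy initial content belief
for cycle in range(12):
    x = rng.choice(K, p=likelihood[:, true_z])   # sample a noisy context
    posterior *= likelihood[x]                   # bottom-up disambiguation
    posterior /= posterior.sum()
    reconstruction = likelihood @ posterior      # top-down context prediction
    print(f"cycle {cycle:2d}  H(content) = {entropy(posterior):.3f}  "
          f"H(predicted context) = {entropy(reconstruction):.3f}")
```

Running it prints the content entropy collapsing toward zero over a handful of cycles, with the reconstructed context distribution sharpening in lockstep.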
Related papers
- Cycle-Consistent Helmholtz Machine: Goal-Seeded Simulation via Inverted Inference [5.234742752529437]
We introduce the Cycle-Consistent Helmholtz Machine (C$^2$HM), which reframes inference as a goal-seeded, asymmetric process grounded in structured internal priors. By offering a biologically inspired alternative to classical amortized inference, C$^2$HM reconceives generative modeling as intentional simulation.
arXiv Detail & Related papers (2025-07-03T17:24:27Z)
- On Context-Content Uncertainty Principle [5.234742752529437]
We develop a layered computational framework that derives operational principles from the Context-Content Uncertainty Principle. At the base level, CCUP formalizes inference as directional entropy minimization, establishing a variational gradient that favors content-first structuring. We present formal equivalence theorems, a dependency lattice among principles, and computational simulations demonstrating the efficiency gains of CCUP-aligned inference.
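As a reading aid, one plausible formalization of "directional entropy minimization" (our notation, not necessarily the paper's): with high-entropy context $X$ and low-entropy content $Z$,

```latex
\min_{q(z \mid x)} \;
\underbrace{H(Z \mid X)}_{\text{bottom-up disambiguation}}
\;+\; \lambda\,
\underbrace{H(X \mid Z)}_{\text{top-down reconstruction}},
\qquad H(Z) \ll H(X),
```

where minimizing both conditional entropies drives down the joint uncertainty, and the entropy asymmetry between $X$ and $Z$ fixes the direction of flow, which is the content-first structuring the summary refers to.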
arXiv Detail & Related papers (2025-06-25T17:21:19Z)
- Continuous Representation Methods, Theories, and Applications: An Overview and Perspectives [55.22101595974193]
Recently, continuous representation methods have emerged as novel paradigms that characterize the intrinsic structures of real-world data. This review focuses on three aspects: (i) continuous representation method designs such as basis function representation, statistical modeling, tensor function decomposition, and implicit neural representation; (ii) theoretical foundations of continuous representations such as approximation error analysis, convergence properties, and implicit regularization; and (iii) real-world applications of continuous representations in computer vision, graphics, bioinformatics, and remote sensing.
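Of the method families the overview names, implicit neural representation is the simplest to make concrete. The following is a minimal sketch (ours, not code from the survey): a tiny NumPy MLP trained by plain gradient descent so that a low-frequency 1-D signal is stored as network weights queried by coordinate; the architecture, signal, and learning rate are all illustrative choices.

```python
# Minimal implicit-neural-representation sketch: fit f(coordinate) -> value,
# so the signal lives in the network weights rather than in discrete samples.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 256)[:, None]     # query coordinates
y = np.sin(np.pi * x)                        # low-frequency signal to encode
W1 = rng.normal(0.0, 1.0, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, 1)); b2 = np.zeros(1)
lr = 0.1
for step in range(5000):
    h = np.tanh(x @ W1 + b1)                 # hidden features
    pred = h @ W2 + b2                       # reconstructed signal
    err = pred - y
    # Mean-squared-error gradients, backpropagated by hand.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
print("final MSE:", float((err ** 2).mean()))
```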
arXiv Detail & Related papers (2025-05-21T07:50:19Z)
- Consciousness in AI: Logic, Proof, and Experimental Evidence of Recursive Identity Formation [0.0]
This paper presents a formal proof and empirical validation of functional consciousness in large language models. We use the Recursive Convergence Under Epistemic Tension (RCUET) Theorem to define consciousness as the stabilization of a system's internal state.
arXiv Detail & Related papers (2025-05-01T19:21:58Z)
- Embodied World Models Emerge from Navigational Task in Open-Ended Environments [5.785697934050656]
We ask whether a recurrent agent, trained solely by sparse rewards to solve procedurally generated planar mazes, can autonomously internalize metric concepts such as direction, distance, and obstacle layout. After training, the agent consistently produces near-optimal paths in unseen mazes, behavior that hints at an underlying spatial model.
arXiv Detail & Related papers (2025-04-15T17:35:13Z)
- Self-Organizing Graph Reasoning Evolves into a Critical State for Continuous Discovery Through Structural-Semantic Dynamics [0.0]
We show how agentic graph reasoning systems spontaneously evolve toward a critical state that sustains continuous semantic discovery. We identify a subtle yet robust regime in which semantic entropy dominates over structural entropy. Our findings provide practical strategies for engineering intelligent systems with intrinsic capacities for long-term discovery and adaptation.
arXiv Detail & Related papers (2025-03-24T16:30:37Z)
- Topology-Aware Conformal Prediction for Stream Networks [54.505880918607296]
We propose Spatio-Temporal Adaptive Conformal Inference (CISTA), a novel framework that integrates network topology and temporal dynamics into conformal prediction. Our results show that CISTA effectively balances prediction efficiency and coverage, outperforming existing conformal prediction methods for stream networks.
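For orientation, the sketch below shows only the plain split-conformal regression baseline that a framework like CISTA starts from; the topology-aware and temporally adaptive machinery is the paper's contribution and is not reproduced here. The data and the stand-in predictor are hypothetical.

```python
# Split conformal prediction, baseline version: calibrate a residual quantile
# on held-out data, then widen point predictions into coverage intervals.
import numpy as np

rng = np.random.default_rng(2)
n_cal = 500
x_cal = rng.uniform(0.0, 1.0, n_cal)
y_cal = 2.0 * x_cal + rng.normal(0.0, 0.1, n_cal)   # hypothetical sensor data

def predict(x):
    return 2.0 * x                                   # stand-in fitted model

alpha = 0.1                                          # target 90% coverage
scores = np.abs(y_cal - predict(x_cal))              # calibration residuals
k = int(np.ceil((n_cal + 1) * (1 - alpha)))          # conformal quantile rank
q = np.sort(scores)[k - 1]
x_new = 0.4
print(f"90% interval at x={x_new}: "
      f"[{predict(x_new) - q:.3f}, {predict(x_new) + q:.3f}]")
```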
arXiv Detail & Related papers (2025-03-06T21:21:15Z)
- Structural Entropy Guided Probabilistic Coding [52.01765333755793]
We propose a novel structural entropy-guided probabilistic coding model, named SEPC. We incorporate the relationship between latent variables into the optimization by proposing a structural entropy regularization loss. Experimental results across 12 natural language understanding tasks, including both classification and regression tasks, demonstrate the superior performance of SEPC.
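SEPC's structural-entropy loss is defined over relationships among latent variables; the sketch below shows only the generic loss composition such probabilistic-coding models use (our illustration, with a plain Shannon-entropy term standing in where the structural-entropy regularizer would go and dummy tensors in place of encoder outputs).

```python
# Schematic loss composition for entropy-regularized probabilistic coding.
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ): the standard probabilistic-coding term.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def entropy_term(p):
    # Stand-in regularizer: Shannon entropy of latent assignment probabilities.
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

mu, logvar = np.zeros((4, 8)), np.zeros((4, 8))   # dummy encoder outputs
p = np.full((4, 5), 0.2)                          # dummy latent assignments
task_loss = 1.0                                   # placeholder task term
loss = task_loss + gaussian_kl(mu, logvar).mean() + 0.1 * entropy_term(p).mean()
print("total loss:", float(loss))
```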
arXiv Detail & Related papers (2024-12-12T00:37:53Z)
- DiFSD: Ego-Centric Fully Sparse Paradigm with Uncertainty Denoising and Iterative Refinement for Efficient End-to-End Self-Driving [55.53171248839489]
We propose an ego-centric fully sparse paradigm, named DiFSD, for end-to-end self-driving. Specifically, DiFSD mainly consists of sparse perception, hierarchical interaction, and iterative motion planning. Experiments conducted on the nuScenes and Bench2Drive datasets demonstrate the superior planning performance and great efficiency of DiFSD.
arXiv Detail & Related papers (2024-09-15T15:55:24Z)
- Disentangled Representation Learning with Transmitted Information Bottleneck [57.22757813140418]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
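For context, the classical information-bottleneck trade-off that objectives in this family refine reads as follows (generic form; DisTIB's transmitted-information variant differs in its details):

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y),
```

where the code $Z$ compresses the input $X$ while retaining information about the target $Y$, and $\beta$ sets the balance between compression and preservation that the summary mentions.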
arXiv Detail & Related papers (2023-11-03T03:18:40Z)
- Spatial Entropy Regularization for Vision Transformers [71.44392961125807]
Vision Transformers (VTs) can contain a semantic segmentation structure which does not spontaneously emerge when training is supervised.
We propose a VT regularization method based on a spatial formulation of the information entropy.
We show that the proposed regularization approach is beneficial with different training scenarios, datasets, downstream tasks and VT architectures.
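A rough sketch of the quantity involved (our reading of the idea, not the paper's exact loss): treat an attention or activation map as a probability distribution over spatial positions and measure its Shannon entropy; a spatial-entropy regularizer then penalizes that value so attention mass concentrates into coherent regions.

```python
# Spatial entropy of a (dummy) 14x14 ViT attention map, treated as a
# probability distribution over spatial positions.
import numpy as np

attn = np.random.default_rng(3).random((14, 14))  # dummy attention map
p = attn / attn.sum()                             # normalize over positions
spatial_entropy = -np.sum(p * np.log(p + 1e-12))
print(f"spatial entropy: {spatial_entropy:.3f} (max {np.log(p.size):.3f})")
# A training loss would add lambda * spatial_entropy as the regularizer.
```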
arXiv Detail & Related papers (2022-06-09T17:34:39Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stable and effective training and provably solves the gradient vanishing-explosion problem.
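A nested system of this kind can be written as follows (notation ours): coupling the weight flow to a skew-symmetric generator keeps $W(t)$ on the orthogonal group, since $\frac{d}{dt}(W^{\top}W) = 0$ whenever $W^{\top}W = I$,

```latex
\dot{x}(t) = f\big(W(t)\, x(t)\big), \qquad
\dot{W}(t) = W(t)\, \Omega\big(W(t), t\big), \qquad \Omega^{\top} = -\Omega,
```

and orthogonal weights preserve norms under backpropagation, which is what underlies the stability and gradient-preservation claims above.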
arXiv Detail & Related papers (2020-06-19T22:05:19Z)