Compositional Symmetry as Compression: Lie Pseudogroup Structure in Algorithmic Agents
- URL: http://arxiv.org/abs/2510.10586v1
- Date: Sun, 12 Oct 2025 13:06:37 GMT
- Title: Compositional Symmetry as Compression: Lie Pseudogroup Structure in Algorithmic Agents
- Authors: Giulio Ruffini
- Abstract summary: In the algorithmic (Kolmogorov) view, agents are programs that track and compress sensory streams using generative programs. We propose a framework where the relevant structural prior is simplicity (Solomonoff) understood as \emph{compositional symmetry}. We show that accurate world-tracking imposes (i) \emph{structural constraints} and (ii) \emph{dynamical constraints} under static inputs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In the algorithmic (Kolmogorov) view, agents are programs that track and compress sensory streams using generative programs. We propose a framework where the relevant structural prior is simplicity (Solomonoff) understood as \emph{compositional symmetry}: natural streams are well described by (local) actions of finite-parameter Lie pseudogroups on geometrically and topologically complex low-dimensional configuration manifolds (latent spaces). Modeling the agent as a generic neural dynamical system coupled to such streams, we show that accurate world-tracking imposes (i) \emph{structural constraints} -- equivariance of the agent's constitutive equations and readouts -- and (ii) \emph{dynamical constraints}: under static inputs, symmetry induces conserved quantities (Noether-style labels) in the agent dynamics and confines trajectories to reduced invariant manifolds; under slow drift, these manifolds move but remain low-dimensional. This yields a hierarchy of reduced manifolds aligned with the compositional factorization of the pseudogroup, providing a geometric account of the ``blessing of compositionality'' in deep models. We connect these ideas to the Spencer formalism for Lie pseudogroups and formulate a symmetry-based, self-contained version of predictive coding in which higher layers receive only \emph{coarse-grained residual transformations} (prediction-error coordinates) along symmetry directions unresolved at lower layers.
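To make the dynamical constraints concrete, here is a minimal numerical sketch (my construction, not code from the paper): an SO(2)-equivariant planar vector field whose flow conserves a Noether-style label, the squared radius, so trajectories stay on a circle, i.e. a reduced invariant manifold.

```python
# A minimal sketch (not from the paper): an SO(2)-equivariant agent dynamics
# whose trajectories conserve a Noether-style label and stay on a reduced
# invariant manifold (a circle) under static input.
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])  # so(2) generator

def f(x, omega=1.5):
    """Equivariant vector field: f(Rx) = R f(x) for any rotation R,
    because J commutes with every 2-D rotation matrix."""
    return omega * J @ x

# Check equivariance at a random point and rotation angle.
rng = np.random.default_rng(0)
x = rng.normal(size=2)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(f(R @ x), R @ f(x))

# Integrate with RK4 and track the conserved quantity ||x||^2,
# the Noether-style label for the SO(2) action.
def rk4_step(x, dt=1e-2):
    k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
    return x + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)

labels = []
for _ in range(1000):
    labels.append(x @ x)
    x = rk4_step(x)
print(f"max drift of conserved label: {max(labels) - min(labels):.2e}")
```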
Related papers
- Toward Manifest Relationality in Transformers via Symmetry Reduction [0.0]
Transformer models contain substantial internal redundancy. Recent approaches address this by explicitly breaking symmetry. We propose a complementary framework based on symmetry reduction.
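The abstract does not spell out the reduction, but a standard transformer symmetry illustrates the idea: attention logits depend only on the product W_q W_k^T, so the pair (W_q, W_k) carries a redundancy that can be quotiented out. A hedged sketch of one such canonicalization (my choice of a QR factorization, not necessarily the paper's):

```python
# Hypothetical illustration (names and recipe are mine, not the paper's):
# attention logits depend only on W_q @ W_k.T, so (W_q, W_k) -> (W_q A, W_k A^{-T})
# is an exact symmetry. One symmetry-reduced canonical form: QR-factor W_q
# and absorb the triangular factor into W_k.
import numpy as np

rng = np.random.default_rng(1)
d_model, d_head = 8, 4
W_q = rng.normal(size=(d_model, d_head))
W_k = rng.normal(size=(d_model, d_head))
X = rng.normal(size=(5, d_model))  # a toy sequence of 5 tokens

def logits(X, W_q, W_k):
    return (X @ W_q) @ (X @ W_k).T

# Canonicalize: W_q = Q R with Q having orthonormal columns; move R to W_k.
Q, R = np.linalg.qr(W_q)
W_q_red, W_k_red = Q, W_k @ R.T

assert np.allclose(logits(X, W_q, W_k), logits(X, W_q_red, W_k_red))
print("attention logits unchanged after symmetry reduction")
```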
arXiv Detail & Related papers (2026-02-21T19:43:17Z)
- Rethinking Diffusion Models with Symmetries through Canonicalization with Applications to Molecular Graph Generation [56.361076943802594]
CanonFlow achieves state-of-the-art performance on the challenging GEOM-DRUG dataset, and the advantage remains large in few-step generation.
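Canonicalization, in general, maps every input to a distinguished representative of its symmetry orbit. The following is a generic PCA-based sketch for rigid motions of 3-D point clouds, offered only as an illustration of the technique, not as CanonFlow's actual procedure:

```python
# Generic sketch of canonicalization for 3-D point clouds: quotient out
# translations by centering and rotations by aligning principal axes.
# This is a standard recipe, not necessarily CanonFlow's.
import numpy as np

def canonicalize(coords):
    """Map an (N, 3) conformation to a canonical translation/rotation frame."""
    centered = coords - coords.mean(axis=0)      # quotient out translations
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    R = eigvecs[:, ::-1]                         # principal axes first
    if np.linalg.det(R) < 0:                     # keep a proper rotation
        R[:, -1] *= -1
    return centered @ R

rng = np.random.default_rng(2)
mol = rng.normal(size=(12, 3))
theta = 1.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = mol @ Rz.T + np.array([3.0, -2.0, 0.5])  # random rigid motion
# Canonical forms agree up to per-axis sign flips (a residual reflection
# ambiguity a full canonicalization would also fix), hence the abs().
print(np.allclose(np.abs(canonicalize(mol)), np.abs(canonicalize(moved)), atol=1e-8))
```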
arXiv Detail & Related papers (2026-02-16T18:58:55Z)
- Merge on workspaces as Hopf algebra Markov chain [0.0]
We study the dynamical properties of a Hopf algebra Markov chain whose state space consists of binary rooted forests with labelled leaves. This Markovian dynamical system describes the core computational process of structure formation and transformation in syntax via the Merge operation.
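As a toy illustration of the state space and one Markov transition (my own minimal encoding, not the paper's Hopf-algebra machinery), a workspace can be held as a list of nested tuples, and a step applies external Merge to two of its trees:

```python
# Minimal toy encoding: a workspace is a forest of binary rooted trees with
# labelled leaves; one Markov step merges two chosen trees.
import random

def merge(t1, t2):
    """External Merge: pair two trees. The tuple order is an artefact of the
    encoding; conceptually the pair is unordered."""
    return (t1, t2)

def markov_step(workspace, rng):
    """Pick two distinct trees in the workspace and merge them."""
    if len(workspace) < 2:
        return workspace
    i, j = rng.sample(range(len(workspace)), 2)
    merged = merge(workspace[i], workspace[j])
    rest = [t for k, t in enumerate(workspace) if k not in (i, j)]
    return rest + [merged]

rng = random.Random(0)
workspace = ["the", "cat", "sleeps"]   # lexical items as leaves
while len(workspace) > 1:
    workspace = markov_step(workspace, rng)
print(workspace)                       # one binary rooted tree, seed-dependent
```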
arXiv Detail & Related papers (2025-12-21T19:26:41Z)
- Dynamics of Agentic Loops in Large Language Models: A Geometric Theory of Trajectories [0.0]
This paper introduces a geometric framework for analyzing agentic trajectories in semantic embedding space. Because cosine similarity is biased by embedding anisotropy, we introduce an isotonic calibration. This enables rigorous measurement of trajectories, clusters, and attractors.
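One plausible reading of the calibration step (the paper's exact construction may differ) is to fit a monotone map from raw cosine scores to an empirical similarity signal with isotonic regression, e.g.:

```python
# Hedged sketch: calibrate cosine similarities with isotonic regression.
# The "truth" signal here is a toy stand-in for whatever supervision the
# actual calibration uses.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
E = rng.normal(size=(500, 64)) + 2.0   # anisotropic embeddings (shifted mean)
E /= np.linalg.norm(E, axis=1, keepdims=True)

i, j = rng.integers(0, 500, size=(2, 2000))
raw = np.einsum("nd,nd->n", E[i], E[j])        # raw cosine similarities
truth = (rng.random(2000) < (raw - raw.min()) / np.ptp(raw)).astype(float)

iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrated = iso.fit_transform(raw, truth)     # monotone recalibration
print(f"raw range [{raw.min():.2f}, {raw.max():.2f}] -> "
      f"calibrated range [{calibrated.min():.2f}, {calibrated.max():.2f}]")
```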
arXiv Detail & Related papers (2025-12-11T07:06:14Z)
- Categorical Equivariant Deep Learning: Category-Equivariant Neural Networks and Universal Approximation Theorems [0.0]
We develop a theory of category-equivariant neural networks (CENNs). CENNs unify group/groupoid-equivariant networks, poset/lattice-equivariant networks, and graph and sheaf neural networks. We instantiate the framework for groups/groupoids, posets/lattices, graphs, and cellular sheaves.
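Any such unifying framework must recover the familiar special cases. The simplest, a permutation-equivariant (DeepSets-style) linear layer, can be checked in a few lines; this is standard material, not the paper's categorical construction:

```python
# Standard special case a categorical framework would have to recover:
# a permutation-equivariant linear layer, f(PX) = P f(X) for every
# permutation matrix P.
import numpy as np

def equivariant_layer(X, a=0.8, b=0.3):
    """X: (n, d) set of n feature vectors; output is permutation-equivariant
    because it mixes only the identity and the set mean."""
    return a * X + b * X.mean(axis=0, keepdims=True)

rng = np.random.default_rng(4)
X = rng.normal(size=(6, 5))
perm = rng.permutation(6)
assert np.allclose(equivariant_layer(X[perm]), equivariant_layer(X)[perm])
print("permutation equivariance holds")
```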
arXiv Detail & Related papers (2025-11-23T12:07:45Z)
- Information Loss and Cost in Symmetry Breaking [0.0]
We develop an information-theoretic framework to characterize symmetry breaking in two spatial dimensions. This perspective naturally connects to the description of anyon condensation in topological phases of matter. Our results forge new connections between operator algebras, tensor category theory, and quantum information in the study of generalized symmetries.
arXiv Detail & Related papers (2025-09-29T11:34:31Z) - Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
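For orientation, the standard Wasserstein gradient-flow equation presumably meant here is the continuity equation driven by the first variation of a loss functional $F$ (standard material, not quoted from the paper):

```latex
% Wasserstein gradient flow of a loss functional F over the parameter measure:
\partial_t \mu_t \;=\; \nabla \cdot \Bigl( \mu_t \, \nabla \frac{\delta F}{\delta \mu}[\mu_t] \Bigr),
% where \delta F / \delta \mu denotes the first variation of F at \mu_t.
```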
arXiv Detail & Related papers (2025-06-26T22:40:30Z) - Relative Representations: Topological and Geometric Perspectives [50.85040046976025]
Relative representations are an established approach to zero-shot model stitching. First, we introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations. Second, we propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
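For orientation, a relative representation encodes each sample by its cosine similarities to a set of anchors. The sketch below adds a per-dimension standardization as one possible normalization that removes non-isotropic rescalings; the paper's actual procedure may differ:

```python
# Sketch of relative representations with a per-dimension standardization;
# this is one map that kills non-isotropic rescalings, not necessarily the
# paper's normalization.
import numpy as np

def relative_representation(Z, anchors, normalize=True):
    """Z: (n, d) embeddings; anchors: (k, d). Returns (n, k) relative coords."""
    if normalize:
        mu, sigma = Z.mean(axis=0), Z.std(axis=0)
        Z = (Z - mu) / sigma                 # undo per-dimension rescaling
        anchors = (anchors - mu) / sigma
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Zn @ An.T                          # cosine similarities to anchors

rng = np.random.default_rng(5)
Z = rng.normal(size=(100, 16))
anchors = Z[:10]
scale = rng.uniform(0.1, 5.0, size=16)        # a non-isotropic rescaling
rel_a = relative_representation(Z, anchors)
rel_b = relative_representation(Z * scale, anchors * scale)
print(np.allclose(rel_a, rel_b))              # True: invariant by construction
```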
arXiv Detail & Related papers (2024-09-17T08:09:22Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that under these conditions, the generative functional model admits the same symmetry.
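The symmetry statement behind results of this type is standard in the equivariant-diffusion literature; for an isometric group action it reads (my paraphrase, not the paper's theorem statement):

```latex
% If the reference measure is G-invariant and the learned score is
% G-equivariant,
p_T(g \cdot x) = p_T(x), \qquad s_\theta(g \cdot x, t) = g \cdot s_\theta(x, t) \quad \forall g \in G,
% then every generated marginal inherits the symmetry:
p_t(g \cdot x) = p_t(x), \qquad 0 \le t \le T.
```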
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Optimization Dynamics of Equivariant and Augmented Neural Networks [2.7918308693131135]
We investigate the optimization of neural networks on symmetric data.
We compare the strategy of constraining the architecture to be equivariant to that of using data augmentation.
Our analysis reveals that even in the latter situation, stationary points may be unstable for augmented training although they are stable for the manifestly equivariant models.
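A toy contrast between the two strategies (my construction, not the paper's analysis): on a permutation-invariant regression target, an equivariant model shares one weight, while an unconstrained model is trained on randomly permuted copies of the data:

```python
# Equivariant-by-construction vs. augmentation on a permutation-invariant
# target y = sum(x); both should recover the all-ones solution here.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(256, 4))
y = X.sum(axis=1)                        # permutation-invariant target

w_shared = 0.0                           # equivariant model: y_hat = w * sum(x)
w_full = rng.normal(size=4)              # unconstrained model + augmentation
lr = 0.02
for _ in range(500):
    s = X.sum(axis=1)
    w_shared -= lr * 2 * np.mean((w_shared * s - y) * s)
    perm = rng.permutation(4)            # augmentation: random permuted copy
    Xp = X[:, perm]
    w_full -= lr * 2 * (Xp.T @ (Xp @ w_full - y)) / len(X)

print(f"equivariant weight: {w_shared:.3f}  (optimum 1.0)")
print(f"augmented weights:  {np.round(w_full, 3)}  (optimum all 1.0)")
```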
arXiv Detail & Related papers (2023-03-23T17:26:12Z) - The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal stochastic approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to the lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
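A concrete instance of such a scheme (a generic illustration, not the paper's template) is stochastic approximation on the unit sphere, where the Euclidean update is replaced by a geodesic step through the exponential map:

```python
# Riemannian Robbins-Monro sketch on S^2: x_{k+1} = Exp_{x_k}(-gamma_k v_k)
# with diminishing step sizes gamma_k = 1/k and a noisy gradient oracle.
import numpy as np

def exp_sphere(x, v):
    """Exponential map on the unit sphere: geodesic step from x along tangent v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def project_tangent(x, g):
    return g - (g @ x) * x               # remove the normal component

rng = np.random.default_rng(7)
target = np.array([0.0, 0.0, 1.0])       # minimize f(x) = -<x, target> on S^2
x = np.array([1.0, 0.0, 0.0])
for k in range(1, 2001):
    gamma = 1.0 / k                      # Robbins-Monro step-size schedule
    noisy_grad = -target + 0.1 * rng.normal(size=3)
    v = project_tangent(x, noisy_grad)   # Riemannian gradient estimate
    x = exp_sphere(x, -gamma * v)
print(np.round(x, 3))                    # approximately the target pole
```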
arXiv Detail & Related papers (2022-06-14T12:30:11Z) - A Unifying and Canonical Description of Measure-Preserving Diffusions [60.59592461429012]
A complete recipe for measure-preserving diffusions in Euclidean space was recently derived, unifying several MCMC algorithms into a single framework.
We develop a geometric theory that improves and generalises this construction to any manifold.
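For reference, the Euclidean recipe in question is, to my best knowledge, the one of Ma et al. (2015): every diffusion with stationary density $\propto e^{-H(\theta)}$ can be written as

```latex
d\theta \;=\; \bigl[\, -\bigl(D(\theta) + Q(\theta)\bigr)\,\nabla H(\theta) + \Gamma(\theta) \,\bigr]\,dt \;+\; \sqrt{2\,D(\theta)}\; dW_t,
% with D(\theta) positive semidefinite, Q(\theta) skew-symmetric, and the
% correction term
\Gamma_i(\theta) \;=\; \sum_j \frac{\partial}{\partial \theta_j}\bigl(D_{ij}(\theta) + Q_{ij}(\theta)\bigr).
```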
arXiv Detail & Related papers (2021-05-06T17:36:55Z)