Beyond Optimization: Intelligence as Metric-Topology Factorization under Geometric Incompleteness
- URL: http://arxiv.org/abs/2602.07974v1
- Date: Sun, 08 Feb 2026 13:59:22 GMT
- Title: Beyond Optimization: Intelligence as Metric-Topology Factorization under Geometric Incompleteness
- Authors: Xin Li
- Abstract summary: We argue that intelligence is not navigation through a fixed maze, but the ability to reshape representational geometry so desired behaviors become stable attractors. We show any fixed metric is geometrically incomplete: for any local metric representation, some topological transformations make it singular or incoherent. We introduce the Topological Urysohn Machine (TUM), implementing MTF through memory-amortized metric inference.
- Score: 6.0044467881527614
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Contemporary ML often equates intelligence with optimization: searching for solutions within a fixed representational geometry. This works in static regimes but breaks under distributional shift, task permutation, and continual learning, where even mild topological changes can invalidate learned solutions and trigger catastrophic forgetting. We propose Metric-Topology Factorization (MTF) as a unifying geometric principle: intelligence is not navigation through a fixed maze, but the ability to reshape representational geometry so desired behaviors become stable attractors. Learning corresponds to metric contraction (a controlled deformation of Riemannian structure), while task identity and environmental variation are encoded topologically and stored separately in memory. We show any fixed metric is geometrically incomplete: for any local metric representation, some topological transformations make it singular or incoherent, implying an unavoidable stability-plasticity tradeoff for weight-based systems. MTF resolves this by factorizing stable topology from plastic metric warps, enabling rapid adaptation via geometric switching rather than re-optimization. Building on this, we introduce the Topological Urysohn Machine (TUM), implementing MTF through memory-amortized metric inference (MAMI): spectral task signatures index amortized metric transformations, letting a single learned geometry be reused across permuted, reflected, or parity-altered environments. This explains robustness to task reordering, resistance to catastrophic forgetting, and generalization across transformations that defeat conventional continual learning methods (e.g., EWC).
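The memory-amortized metric inference (MAMI) mechanism described above rests on two ingredients: a task signature invariant to the transformations the system should absorb, and a memory that maps signatures to stored metric transformations. The following is a minimal NumPy sketch of the indexing step only; all names and the choice of covariance eigenvalues as the "spectral task signature" are illustrative assumptions, not the paper's implementation. The point it demonstrates: because eigenvalues are unchanged under permutation of input dimensions, a permuted environment retrieves the same stored geometry instead of triggering re-optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_signature(X, k=5, decimals=6):
    """Permutation-invariant task signature: top-k eigenvalues of the
    feature covariance (unchanged when input dimensions are reordered)."""
    eig = np.linalg.eigvalsh(np.cov(X.T))
    return tuple(np.round(np.sort(eig)[::-1][:k], decimals))

# Base task: anisotropic Gaussian inputs.
X = rng.normal(size=(500, 8)) * np.linspace(1.0, 3.0, 8)

# Permuted environment: same task, input dimensions reordered.
perm = rng.permutation(8)
X_perm = X[:, perm]

# Memory maps signatures to stored geometries (here just a label
# standing in for an amortized metric transformation); the permuted
# environment hits the same entry, so the geometry is reused.
memory = {spectral_signature(X): "canonical-geometry"}
assert spectral_signature(X_perm) in memory
```

In this toy setting the lookup replaces gradient-based re-adaptation entirely; in the paper's framing, the retrieved entry would be a metric warp applied to a single stable topology.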
Related papers
- ArGEnT: Arbitrary Geometry-encoded Transformer for Operator Learning [2.757490632589873]
We propose Arbitrary Geometry-encoded Transformer (ArGEnT), a geometry-aware attention-based architecture for operator learning on arbitrary domains. By combining flexible geometry encoding with operator-learning capabilities, ArGEnT provides a scalable surrogate modeling framework for optimization, uncertainty, and data-driven modeling of complex physical systems.
arXiv Detail & Related papers (2026-02-12T06:22:59Z)
- Discrete Solution Operator Learning for Geometry-Dependent PDEs [5.010936781094744]
We introduce DiSOL, a paradigm that learns discrete solution procedures rather than continuous function-space operators. DiSOL factorizes the solver into learnable stages that mirror classical discretizations. These results highlight the need for operator representations in geometry-dominated problems.
arXiv Detail & Related papers (2026-01-14T04:34:49Z)
- Learning Geometry: A Framework for Building Adaptive Manifold Models through Metric Optimization [8.201374511929538]
This paper proposes a novel paradigm for machine learning that moves beyond traditional parameter optimization. We optimize the metric tensor field on a manifold with a predefined topology, thereby dynamically shaping the geometric structure of the model space. This work lays a solid foundation for constructing fully dynamic "meta-learners" capable of autonomously evolving their geometry and topology.
arXiv Detail & Related papers (2025-10-30T01:53:32Z)
- The Neural Differential Manifold: An Architecture with Explicit Geometric Structure [8.201374511929538]
This paper introduces the Neural Differential Manifold (NDM), a novel neural network architecture that explicitly incorporates geometric structure into its fundamental design. We analyze the theoretical advantages of this approach, including its potential for more efficient optimization, enhanced continual learning, and applications in scientific discovery and controllable generative modeling.
arXiv Detail & Related papers (2025-10-29T02:24:27Z)
- Geometrically Constrained and Token-Based Probabilistic Spatial Transformers [5.437226012505534]
We revisit Spatial Transformer Networks (STNs) as a canonicalization tool for transformer-based vision pipelines. We propose a probabilistic, component-wise extension that improves robustness. Experiments on challenging moth classification benchmarks demonstrate that our method consistently improves robustness compared to other STNs.
arXiv Detail & Related papers (2025-09-14T11:30:53Z)
- Generalized Linear Mode Connectivity for Transformers [87.32299363530996]
A striking phenomenon is linear mode connectivity (LMC), where independently trained models can be connected by low- or zero-loss paths. Prior work has predominantly focused on neuron re-ordering through permutations, but such approaches are limited in scope. We introduce a unified framework that captures four symmetry classes: permutations, semi-permutations, transformations, and general invertible maps. This generalization enables, for the first time, the discovery of low- and zero-barrier linear paths between independently trained Vision Transformers and GPT-2 models.
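As a toy illustration of the permutation symmetry class this line of work builds on (a hypothetical NumPy sketch, not the paper's framework): two one-hidden-layer ReLU networks that differ only by a hidden-unit permutation can be aligned by matching units, after which every point on the linear interpolation path between them computes the same function, i.e. a zero-barrier path.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, W1, W2):
    # One-hidden-layer ReLU network.
    return np.maximum(W1 @ x, 0.0) @ W2

# Model A, and model B = A with its hidden units permuted
# (a loss-preserving symmetry of the architecture).
W1a = rng.normal(size=(16, 4)); W2a = rng.normal(size=16)
pi = rng.permutation(16)
W1b, W2b = W1a[pi], W2a[pi]

# Align B to A by matching hidden units (exact row matching suffices
# in this constructed example), then interpolate the aligned weights.
match = {tuple(np.round(r, 8)): i for i, r in enumerate(W1b)}
align = np.array([match[tuple(np.round(r, 8))] for r in W1a])
W1b_al, W2b_al = W1b[align], W2b[align]

# Every point on the linear path computes the same function as A.
x = rng.normal(size=4)
for t in (0.0, 0.5, 1.0):
    W1t = (1 - t) * W1a + t * W1b_al
    W2t = (1 - t) * W2a + t * W2b_al
    assert np.allclose(mlp(x, W1t, W2t), mlp(x, W1a, W2a))
```

Real independently trained networks are not exact permutations of each other, which is why the paper's broader symmetry classes (semi-permutations and general invertible maps) are needed to find low-barrier paths in practice.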
arXiv Detail & Related papers (2025-06-28T01:46:36Z)
- CP$^2$: Leveraging Geometry for Conformal Prediction via Canonicalization [51.716834831684004]
We study the problem of conformal prediction (CP) under geometric data shifts. We propose integrating geometric information, such as geometric pose, into the conformal procedure to reinstate its guarantees.
arXiv Detail & Related papers (2025-06-19T10:12:02Z)
- Fully Geometric Multi-Hop Reasoning on Knowledge Graphs with Transitive Relations [50.05281461410368]
We introduce GeometrE, a geometric embedding method for multi-hop reasoning. It does not require learning the logical operations and enables full geometric interpretability. Our experiments show that GeometrE outperforms current state-of-the-art methods on standard benchmark datasets.
arXiv Detail & Related papers (2025-05-18T11:17:50Z)
- Bridging Geometric States via Geometric Diffusion Bridge [79.60212414973002]
We introduce the Geometric Diffusion Bridge (GDB), a novel generative modeling framework that accurately bridges initial and target geometric states.
GDB employs an equivariant diffusion bridge, derived via a modified version of Doob's $h$-transform, for connecting geometric states.
We show that GDB surpasses existing state-of-the-art approaches, opening up a new pathway for accurately bridging geometric states.
arXiv Detail & Related papers (2024-10-31T17:59:53Z)
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE enhances global feature representation of point cloud masked autoencoders by making them both discriminative and sensitive to transformations. We propose a novel loss that explicitly penalizes invariant collapse, enabling the network to capture richer transformation cues while preserving discriminative representations.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) using an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.