The Representational Geometry of Number
- URL: http://arxiv.org/abs/2602.06843v1
- Date: Fri, 06 Feb 2026 16:35:22 GMT
- Title: The Representational Geometry of Number
- Authors: Zhimin Hu, Lanhao Niu, Sashank Varma
- Abstract summary: We show that number representations preserve a stable relational structure across tasks. We find that task-specific representations are embedded in distinct subspaces, with low-level features like magnitude encoded along separable linear directions. This suggests that understanding arises when task-specific transformations are applied to a shared underlying relational structure of conceptual representations.
- Score: 1.5994376682356057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central question in cognitive science is whether conceptual representations converge onto a shared manifold to support generalization, or diverge into orthogonal subspaces to minimize task interference. While prior work has found evidence for both, a mechanistic account of how these properties coexist and transform across tasks remains elusive. We propose that representational sharing lies not in the concepts themselves, but in the geometric relations between them. Using number concepts as a testbed and language models as high-dimensional computational substrates, we show that number representations preserve a stable relational structure across tasks. Task-specific representations are embedded in distinct subspaces, with low-level features like magnitude and parity encoded along separable linear directions. Crucially, we find that these subspaces are largely transformable into one another via linear mappings, indicating that representations share relational structure despite being located in distinct subspaces. Together, these results provide a mechanistic lens on how language models balance the shared structure of number representations with functional flexibility, suggesting that understanding arises when task-specific transformations are applied to a shared underlying relational structure of conceptual representations.
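To make the two key measurements concrete, here is a minimal, self-contained sketch (synthetic stand-in data and assumed analysis choices, not the authors' code): a representational-similarity check that relational structure is preserved across tasks, and a least-squares linear map between task-specific representations.

```python
# Sketch only: X_a and X_b stand in for number-representation matrices
# (numbers x hidden_dim) extracted from a language model under two task prompts.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_numbers, dim = 20, 64
X_a = rng.normal(size=(n_numbers, dim))                   # task-A number states
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))          # task-specific rotation
X_b = X_a @ Q + 0.05 * rng.normal(size=(n_numbers, dim))  # task-B number states

# (1) Shared relational structure: correlate pairwise-distance profiles (RSA-style).
rsa_r, _ = spearmanr(pdist(X_a, "cosine"), pdist(X_b, "cosine"))

# (2) Linear transformability: fit a least-squares map from task A to task B
# and measure how much of task B's variance it explains.
M, *_ = np.linalg.lstsq(X_a, X_b, rcond=None)
r2 = 1 - np.linalg.norm(X_a @ M - X_b) ** 2 / np.linalg.norm(X_b - X_b.mean(0)) ** 2

print(f"RSA correlation: {rsa_r:.2f}   linear-map R^2: {r2:.2f}")
```

A real analysis would use actual hidden states for each task and cross-validate the fitted map rather than scoring it on the data used to fit it.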
Related papers
- Native Logical and Hierarchical Representations with Subspace Embeddings [25.274936769664098]
We introduce a novel paradigm: embedding concepts as linear subspaces. It naturally supports set-theoretic operations like intersection (conjunction) and linear sum (disjunction). Our method achieves state-of-the-art results in reconstruction and link prediction on WordNet.
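As a concrete illustration of the subspace operations mentioned above, the following sketch (generic linear algebra, not this paper's learned embeddings) computes the linear sum and intersection of two subspaces given orthonormal bases.

```python
# Toy illustration: concepts as linear subspaces, with 'or' as the linear sum
# and 'and' as the subspace intersection.
import numpy as np
from scipy.linalg import orth, null_space

def subspace_sum(U, V):
    """Basis for span(U) + span(V) (disjunction)."""
    return orth(np.hstack([U, V]))

def subspace_intersection(U, V, tol=1e-10):
    """Basis for span(U) & span(V) (conjunction).
    Any shared vector can be written U a = V b, i.e. [U, -V] [a; b] = 0."""
    ns = null_space(np.hstack([U, -V]), rcond=tol)
    if ns.size == 0:
        return np.zeros((U.shape[0], 0))
    return orth(U @ ns[: U.shape[1]])

# Example: two 2-D subspaces of R^3 sharing the x-axis.
U = orth(np.array([[1., 0.], [0., 1.], [0., 0.]]))   # xy-plane
V = orth(np.array([[1., 0.], [0., 0.], [0., 1.]]))   # xz-plane
print(subspace_sum(U, V).shape[1])           # 3 -> all of R^3
print(subspace_intersection(U, V).shape[1])  # 1 -> the shared x-axis
```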
arXiv Detail & Related papers (2025-08-21T18:29:17Z) - The Origins of Representation Manifolds in Large Language Models [52.68554895844062]
We show that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
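A toy numerical check of this kind of relationship (synthetic circular feature manifold rather than real model activations; not the paper's derivation) is sketched below: cosine similarity between embedded points tracks the on-manifold arc-length distance between the underlying feature values.

```python
# Points on a 1-D feature manifold (a circle), isometrically embedded in 64-D.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)                  # latent feature values
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
Q, _ = np.linalg.qr(rng.normal(size=(64, 2)))       # orthonormal embedding map
X = circle @ Q.T                                    # embeddings in 64-D

# On-manifold (geodesic) distance: arc length along the circle.
dt = np.abs(t[:, None] - t[None, :])
geodesic = np.minimum(dt, 2 * np.pi - dt)

# Cosine similarity between the same points in embedding space.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
cosine = Xn @ Xn.T

iu = np.triu_indices(len(t), k=1)
rho, _ = spearmanr(cosine[iu], -geodesic[iu])
print(f"rank correlation of cosine similarity with -geodesic distance: {rho:.2f}")
```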
arXiv Detail & Related papers (2025-05-23T13:31:22Z) - Directional Non-Commutative Monoidal Structures for Compositional Embeddings in Machine Learning [0.0]
We introduce a new structure for compositional embeddings built on directional non-commutative monoidal operators. Our construction defines a distinct composition operator ∘_i for each axis i, ensuring associative combination along each axis without imposing global commutativity. All axis-specific operators commute with one another, enforcing a global interchange law that enables consistent cross-axis compositions.
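The algebra described above can be illustrated with a classical toy instance that is not this paper's construction: matrix multiplication as composition along one axis and the Kronecker product along another. Both are associative and non-commutative, and together they satisfy an interchange law (the Kronecker mixed-product property).

```python
# Toy instance only (not the paper's operators): two directional compositions,
# o1 = matrix product, o2 = Kronecker product, satisfying the interchange law
#   (A o2 B) o1 (C o2 D) == (A o1 C) o2 (B o1 D).
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.normal(size=(3, 3)) for _ in range(4))

compose_1 = lambda X, Y: X @ Y        # axis-1 composition
compose_2 = np.kron                   # axis-2 composition

lhs = compose_1(compose_2(A, B), compose_2(C, D))
rhs = compose_2(compose_1(A, C), compose_1(B, D))
print(np.allclose(lhs, rhs))                          # True: interchange law holds
print(np.allclose(compose_1(A, B), compose_1(B, A)))  # False: not commutative
```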
arXiv Detail & Related papers (2025-05-21T13:27:14Z) - Aligning Instance-Semantic Sparse Representation towards Unsupervised Object Segmentation and Shape Abstraction with Repeatable Primitives [48.155145581663724]
Understanding 3D object shapes requires shape representation by object parts abstracted from results of instance and semantic segmentation. We introduce a one-stage, fully unsupervised framework towards semantic-aware shape representation. This framework produces joint instance segmentation, semantic segmentation, and shape abstraction through sparse representation and feature alignment of object parts in a high-dimensional space.
arXiv Detail & Related papers (2025-03-10T05:52:17Z) - Geometric Relational Embeddings [19.383110247906256]
We propose geometric relational embeddings, a paradigm of embeddings that respect the underlying symbolic structures.
Results obtained from benchmark real-world datasets demonstrate the efficacy of geometric relational embeddings.
arXiv Detail & Related papers (2024-09-18T22:02:24Z) - Compositional Structures in Neural Embedding and Interaction Decompositions [101.40245125955306]
We describe a basic correspondence between linear algebraic structures within vector embeddings in artificial neural networks.
We introduce a characterization of compositional structures in terms of "interaction decompositions"
We establish necessary and sufficient conditions for the presence of such structures within the representations of a model.
arXiv Detail & Related papers (2024-07-12T02:39:50Z) - Latent Functional Maps: a spectral framework for representation alignment [34.20582953800544]
We introduce a multi-purpose framework for the representation learning community that makes it possible to: (i) compare different spaces in an interpretable way and measure their intrinsic similarity; (ii) find correspondences between them, both in unsupervised and weakly supervised settings; and (iii) effectively transfer representations between distinct spaces. We validate our framework on various applications, ranging from stitching to retrieval tasks, and on multiple modalities, demonstrating that Latent Functional Maps can serve as a Swiss Army knife for representation alignment.
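A bare-bones sketch of the functional-map idea behind this kind of alignment follows (details assumed for illustration; this is not the Latent Functional Maps implementation): each embedding space of the same items is summarized by a low-frequency graph-Laplacian eigenbasis, and a small matrix C maps spectral coefficients of functions on one space to coefficients on the other.

```python
import numpy as np

def laplacian_basis(X, k_neighbors=10, k_eigs=20):
    """Low-frequency eigenvectors of a symmetric kNN-graph Laplacian."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    idx = np.argsort(D, axis=1)[:, 1:k_neighbors + 1]
    A = np.zeros_like(D)
    A[np.repeat(np.arange(len(X)), k_neighbors), idx.ravel()] = 1.0
    A = np.maximum(A, A.T)                     # symmetrize adjacency
    L = np.diag(A.sum(1)) - A                  # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :k_eigs]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))                 # space 1: items x dims
Y = np.tanh(X @ rng.normal(size=(32, 48)))     # space 2: same items, new geometry
phi, psi = laplacian_basis(X), laplacian_basis(Y)

# Descriptor functions assumed to correspond across spaces (in practice anchors
# or labels; here random probe functions over the shared items).
F = rng.normal(size=(300, 40))
A_coef, B_coef = phi.T @ F, psi.T @ F
C = np.linalg.lstsq(A_coef.T, B_coef.T, rcond=None)[0].T  # functional map X -> Y

# Transfer any function f defined on space X into space Y's basis.
f = X[:, 0]                                    # an arbitrary function of the items
f_transferred = psi @ (C @ (phi.T @ f))
print("transferred function shape:", f_transferred.shape)
```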
arXiv Detail & Related papers (2024-06-20T10:43:28Z) - Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs)
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z) - On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
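The gist of a similarity-based relational readout in the spirit of CoRelNet can be sketched as follows (details assumed for illustration; consult the paper for the exact model): objects are encoded, all pairwise similarities are computed and normalized into a distribution, and the readout sees only that relation matrix.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_objects, d_in, d_enc = 4, 16, 32

# Hypothetical parameters: an object encoder and a linear readout.
W_enc = rng.normal(size=(d_in, d_enc)) / np.sqrt(d_in)
w_out = rng.normal(size=(n_objects * n_objects,)) / n_objects

def relational_score(objects):
    """objects: (n_objects, d_in) -> scalar decision from relations only."""
    z = np.tanh(objects @ W_enc)            # encode each object
    sims = z @ z.T                          # pairwise similarity matrix
    rel = softmax(sims, axis=-1)            # similarity-distribution scores
    return float(rel.ravel() @ w_out)       # readout sees only relations

x = rng.normal(size=(n_objects, d_in))
print(relational_score(x))
```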
arXiv Detail & Related papers (2022-06-09T16:24:01Z) - Image Synthesis via Semantic Composition [74.68191130898805]
We present a novel approach to synthesize realistic images based on their semantic layouts.
It hypothesizes that objects with similar appearance share similar representations.
Our method establishes dependencies between regions according to their appearance correlation, yielding both spatially variant and associated representations.
arXiv Detail & Related papers (2021-09-15T02:26:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.