If you can distinguish, you can express: Galois theory, Stone--Weierstrass, machine learning, and linguistics
- URL: http://arxiv.org/abs/2510.09902v1
- Date: Fri, 10 Oct 2025 22:26:57 GMT
- Title: If you can distinguish, you can express: Galois theory, Stone--Weierstrass, machine learning, and linguistics
- Authors: Ben Blum-Smith, Claudia Brugman, Thomas Conners, Soledad Villar
- Abstract summary: We provide an elementary theorem connecting the relevant notions of "distinguishing power". We discuss machine learning and data science contexts in which these theorems, and more generally the theme of links between distinguishing power and expressive power, appear.
- Score: 8.34369101796875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This essay develops a parallel between the Fundamental Theorem of Galois Theory and the Stone--Weierstrass theorem: both can be viewed as assertions that tie the distinguishing power of a class of objects to their expressive power. We provide an elementary theorem connecting the relevant notions of "distinguishing power". We also discuss machine learning and data science contexts in which these theorems, and more generally the theme of links between distinguishing power and expressive power, appear. Finally, we discuss the same theme in the context of linguistics, where it appears as a foundational principle, and illustrate it with several examples.
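For readers meeting the parallel for the first time, standard textbook statements of the two theorems are sketched below. These are the usual formulations (assuming an `amsthm`-style `theorem` environment), not quotations from the essay:

```latex
% Stone--Weierstrass (real version): a subalgebra with full
% distinguishing power is expressively dense.
\begin{theorem}[Stone--Weierstrass]
Let $X$ be a compact Hausdorff space and let $A \subseteq C(X,\mathbb{R})$
be a subalgebra that contains the constants and separates points, i.e.,
for all $x \neq y$ in $X$ there is $f \in A$ with $f(x) \neq f(y)$.
Then $A$ is dense in $C(X,\mathbb{R})$ in the uniform norm.
\end{theorem}

% Fundamental Theorem of Galois Theory: subgroups of the Galois group
% correspond exactly to the intermediate fields they fix.
\begin{theorem}[Fundamental Theorem of Galois Theory]
Let $L/K$ be a finite Galois extension with group $G = \mathrm{Gal}(L/K)$.
Then $H \mapsto L^{H}$ is an inclusion-reversing bijection between the
subgroups of $G$ and the intermediate fields $K \subseteq M \subseteq L$,
with inverse $M \mapsto \mathrm{Gal}(L/M)$.
\end{theorem}
```

In both statements, an ability to tell objects apart (separating points; distinguishing field elements by automorphisms) pins down what the class can express (a dense function algebra; the full lattice of intermediate fields).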
Related papers
- Language as Mathematical Structure: Examining Semantic Field Theory Against Language Games [0.0]
Large language models (LLMs) offer a new empirical setting in which long-standing theories of linguistic meaning can be examined. We formalize the notions of lexical fields (Lexfelder) and linguistic fields (Lingofelder) as interacting structures in a continuous semantic space. We argue that the success of LLMs in capturing semantic regularities supports the view that language exhibits an underlying mathematical structure.
arXiv Detail & Related papers (2026-01-01T19:15:17Z)
- A Formal Framework for the Definition of 'State': Hierarchical Representation and Meta-Universe Interpretation [0.0]
This study introduces a mathematically rigorous and unified formal structure for the concept of 'state'. First, a 'hierarchical state grid' composed of two axes, state depth and mapping hierarchy, is proposed to provide a unified notational system applicable across mathematical, physical, and linguistic domains.
arXiv Detail & Related papers (2025-07-14T11:37:35Z)
- DeepTheorem: Advancing LLM Reasoning for Theorem Proving Through Natural Language and Reinforcement Learning [67.93945726549289]
DeepTheorem is a comprehensive informal theorem-proving framework exploiting natural language to enhance mathematical reasoning. DeepTheorem includes a large-scale benchmark dataset consisting of 121K high-quality IMO-level informal theorems and proofs. We devise a novel reinforcement learning strategy (RL-Zero) explicitly tailored to informal theorem proving, leveraging the verified theorem variants to incentivize robust mathematical inference.
arXiv Detail & Related papers (2025-05-29T17:59:39Z)
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization. Here, we propose a definition, which we call representational compositionality, that accounts for and extends our intuitions about compositionality. We show how it unifies disparate intuitions from across the literature in both AI and cognitive science.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Pregeometry, Formal Language and Constructivist Foundations of Physics [0.0]
We discuss the metaphysics of pregeometric structures, upon which new and existing notions of quantum geometry may find a foundation.
We draw attention to evidence suggesting that the framework of formal language, in particular, homotopy type theory, provides the conceptual building blocks for a theory of pregeometry.
arXiv Detail & Related papers (2023-11-07T13:19:29Z)
- A Category-theoretical Meta-analysis of Definitions of Disentanglement [97.34033555407403]
Disentangling the factors of variation in data is a fundamental concept in machine learning.
This paper presents a meta-analysis of existing definitions of disentanglement.
arXiv Detail & Related papers (2023-05-11T15:24:20Z)
- How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding [56.222097640468306]
We provide a mechanistic understanding of how transformers learn "semantic structure".
We show, through a combination of mathematical analysis and experiments on Wikipedia data, that the embedding layer and the self-attention layer encode the topical structure.
arXiv Detail & Related papers (2023-03-07T21:42:17Z)
- Generalization-based similarity [0.0]
We develop an abstract notion of similarity based on the observation that sets of generalizations encode important properties of elements. We show that similarity defined in this way has appealing mathematical properties. We sketch some potential applications to theoretical computer science and artificial intelligence.
arXiv Detail & Related papers (2023-02-13T14:48:59Z)
- HyperMiner: Topic Taxonomy Mining with Hyperbolic Embedding [54.52651110749165]
We present a novel framework that introduces hyperbolic embeddings to represent words and topics.
With the tree-likeness property of hyperbolic space, the underlying semantic hierarchy can be better exploited to mine more interpretable topics.
arXiv Detail & Related papers (2022-10-16T02:54:17Z)
- Exploring Discourse Structures for Argument Impact Classification [48.909640432326654]
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
- Analogical Proportions [0.0]
This paper introduces an abstract framework of analogical proportions of the form '$a$ is to $b$ what $c$ is to $d$' in the general setting of universal algebra.
It turns out that our notion of analogical proportions has appealing mathematical properties.
This paper is a first step towards a theory of analogical reasoning and learning systems with potential applications to fundamental AI-problems.
arXiv Detail & Related papers (2020-06-04T13:44:36Z)
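One familiar machine learning instance of the a:b::c:d pattern is the vector-offset heuristic from word embeddings. The sketch below illustrates that heuristic only; it is not the universal-algebra framework of the paper above, and the toy 2-D vectors are hypothetical values chosen for the example:

```python
import numpy as np

# Toy embeddings (hypothetical 2-D vectors, for illustration only).
vec = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
}

def solve_analogy(a, b, c, vocab):
    """Return the word d making 'a is to b what c is to d',
    via the vector-offset heuristic: d is approximately c + (b - a)."""
    target = vec[c] + (vec[b] - vec[a])
    # Pick the nearest vocabulary word, excluding the three inputs.
    candidates = [w for w in vocab if w not in (a, b, c)]
    return min(candidates, key=lambda w: np.linalg.norm(vec[w] - target))

print(solve_analogy("man", "king", "woman", vec))  # -> queen
```

With these toy vectors the offset king - man added to woman lands exactly on queen, so the nearest-neighbour search recovers the expected fourth term.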
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.