Machines Learn Number Fields, But How? The Case of Galois Groups
- URL: http://arxiv.org/abs/2508.06670v1
- Date: Fri, 08 Aug 2025 19:32:11 GMT
- Title: Machines Learn Number Fields, But How? The Case of Galois Groups
- Authors: Kyu-Hwan Lee, Seewoo Lee
- Abstract summary: We study how simple models can classify the Galois groups of Galois extensions over $\mathbb{Q}$ of degrees 4, 6, 8, 9, and 10. Our interpretation of the machine learning results allows us to understand how the distribution of zeta coefficients depends on the Galois group.
- Score: 0.8287206589886881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By applying interpretable machine learning methods such as decision trees, we study how simple models can classify the Galois groups of Galois extensions over $\mathbb{Q}$ of degrees 4, 6, 8, 9, and 10, using Dedekind zeta coefficients. Our interpretation of the machine learning results allows us to understand how the distribution of zeta coefficients depends on the Galois group, and to prove new criteria for classifying the Galois groups of these extensions. Combined with previous results, this work provides another example of a new paradigm in mathematical research driven by machine learning.
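The interpretable-model idea in the abstract can be illustrated with a toy sketch: a depth-1 decision tree (a single threshold test on one coefficient) is the simplest model of the kind described. The feature vectors below are hypothetical stand-ins for Dedekind zeta coefficients, not real number-field data, and `best_stump` is an illustrative helper, not the authors' method.

```python
# Toy sketch of an interpretable classifier: a depth-1 decision tree
# (one threshold on one feature). The vectors below are HYPOTHETICAL
# stand-ins for zeta coefficients (a_2, a_3, ...), not real data.

def best_stump(X, y):
    """Find (errors, feature index, threshold) minimizing
    misclassifications for a single-split decision tree."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            # predict class 1 when feature j exceeds threshold t
            errors = sum((x[j] > t) != bool(label) for x, label in zip(X, y))
            errors = min(errors, len(y) - errors)  # allow flipped sides
            if best is None or errors < best[0]:
                best = (errors, j, t)
    return best

# Two mock "Galois groups", separable by the second coefficient.
X = [[0, 4, 0], [0, 4, 2], [4, 4, 0],   # group A (label 0)
     [0, 0, 0], [4, 0, 2], [0, 0, 4]]   # group B (label 1)
y = [0, 0, 0, 1, 1, 1]

errors, feature, threshold = best_stump(X, y)
print(feature, threshold, errors)  # -> 1 0 0 (clean split on coefficient 1)
```

The split the stump finds ("coefficient 1 exceeds 0") is directly readable as a criterion, which is the sense in which such models are interpretable.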
Related papers
- From Polynomials to Databases: Arithmetic Structures in Galois Theory [0.0]
We develop a framework for classifying Galois groups of irreducible degree-7 polynomials (septics) over $\mathbb{Q}$, combining explicit resolvent methods with machine learning techniques. A database of over one million normalized projective septics is constructed, each annotated with invariants $J_0, \dots, J_4$ derived from binary transvections.
arXiv Detail & Related papers (2025-11-20T18:29:38Z)
- Neuro-Symbolic Learning for Galois Groups: Unveiling Probabilistic Trends in Polynomials [0.0]
This paper presents a neurosymbolic approach to classifying Galois groups of irreducible polynomials. By combining neural networks with symbolic reasoning, we develop a model that outperforms purely numerical methods in accuracy and interpretability. This work paves the way for future research in computational algebra, with implications for conjectures and higher-degree classifications.
arXiv Detail & Related papers (2025-02-28T08:42:57Z)
- Galois groups of polynomials and neurosymbolic networks [0.0]
This paper introduces a novel approach to understanding Galois theory, one of the foundational areas of algebra, through the lens of machine learning. By analyzing equations with machine learning techniques, we aim to streamline the process of determining solvability by radicals and explore broader applications within Galois theory.
arXiv Detail & Related papers (2025-01-22T16:05:59Z)
- Grokking Group Multiplication with Cosets [10.255744802963926]
Algorithmic tasks have proven to be a fruitful test ground for interpreting a neural network end-to-end.
We completely reverse-engineer fully connected one-hidden-layer networks that have "grokked" the arithmetic of the permutation groups $S_5$ and $S_6$.
We describe how we reverse-engineered the model's mechanisms and confirm that our theory is a faithful description of the circuit's functionality.
arXiv Detail & Related papers (2023-12-11T18:12:18Z)
- Learning to be Simple [0.0]
We employ machine learning to understand structured mathematical data involving finite groups.
We derive a theorem about necessary properties of generators of finite simple groups.
Our work highlights the possibility of generating new conjectures and theorems in mathematics with the aid of machine learning.
arXiv Detail & Related papers (2023-12-08T19:00:00Z)
- Enriching Diagrams with Algebraic Operations [49.1574468325115]
We extend diagrammatic reasoning in monoidal categories with algebraic operations and equations.
We show how this construction can be used for diagrammatic reasoning of noise in quantum systems.
arXiv Detail & Related papers (2023-10-17T14:12:39Z)
- TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models [68.65075559137608]
We propose TRIGO, an ATP benchmark that not only requires a model to reduce a trigonometric expression with step-by-step proofs but also evaluates a generative LM's reasoning ability on formulas.
We gather trigonometric expressions and their reduced forms from the web, annotate the simplification process manually, and translate it into the Lean formal language system.
We develop an automatic generator based on Lean-Gym to create dataset splits of varying difficulties and distributions in order to thoroughly analyze the model's generalization ability.
arXiv Detail & Related papers (2023-10-16T08:42:39Z)
- Discovering Sparse Representations of Lie Groups with Machine Learning [55.41644538483948]
We show that our method reproduces the canonical representations of the generators of the Lorentz group.
This approach is completely general and can be used to find the infinitesimal generators for any Lie group.
arXiv Detail & Related papers (2023-02-10T17:12:05Z)
- Commutative Lie Group VAE for Disentanglement Learning [96.32813624341833]
We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data.
A simple model named Commutative Lie Group VAE is introduced to realize the group-based disentanglement learning.
Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
arXiv Detail & Related papers (2021-06-07T07:03:14Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
This article presents a new kind of interpretable machine learning method. It can help to understand how a classification model partitions the feature space into predicted classes, using quantile shifts. Essentially, real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed.
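The perturb-and-observe idea can be sketched in a few lines: nudge one feature of a data point and watch whether the predicted class flips. The classifier below is a hypothetical toy stand-in, not the fitted model from the paper, and `neighborhood_probe` is an illustrative helper name.

```python
# Sketch of the perturb-and-observe idea: shift one feature of a real
# data point slightly and see whether the prediction changes.
# The classifier here is a HYPOTHETICAL toy model.

def predict(point):
    # toy classifier: class 1 iff the feature sum exceeds 1.0
    return 1 if sum(point) > 1.0 else 0

def neighborhood_probe(point, feature, step=0.1):
    """Return predictions after shifting one feature down, at the point
    itself, and after shifting it up, revealing nearby class boundaries."""
    lower = list(point); lower[feature] -= step
    upper = list(point); upper[feature] += step
    return predict(lower), predict(point), predict(upper)

print(neighborhood_probe([0.5, 0.45], feature=0))  # -> (0, 0, 1)
```

The (0, 0, 1) output signals that a class boundary lies just above the point along feature 0, which is the kind of local information the method extracts.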
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
- Lattice Representation Learning [6.427169570069738]
We introduce theory and algorithms for learning discrete representations that take values on a lattice embedded in a Euclidean space.
Lattice representations possess an interesting combination of properties: a) they can be computed explicitly using lattice quantization, yet they can be learned efficiently using the ideas we introduce.
This article will focus on laying the groundwork for exploring and exploiting the first two properties, including a new mathematical result linking expressions used during training and inference time and experimental validation on two popular datasets.
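Lattice quantization, the operation named above, can be sketched with the simplest case: mapping a continuous vector to the nearest point of the integer lattice $\mathbb{Z}^n$. This is a hypothetical choice of lattice for illustration; the paper's lattices and training procedure may differ.

```python
# Minimal sketch of lattice quantization: snap a continuous vector to
# the nearest point of the integer lattice Z^n. This lattice choice is
# HYPOTHETICAL; it only illustrates the operation named in the abstract.

def quantize(vector):
    """Nearest-point quantization onto the integer lattice Z^n."""
    return [round(x) for x in vector]

print(quantize([0.2, 1.7, -0.6]))  # -> [0, 2, -1]
```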
arXiv Detail & Related papers (2020-06-24T16:05:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.