Galois groups of polynomials and neurosymbolic networks
- URL: http://arxiv.org/abs/2501.12978v1
- Date: Wed, 22 Jan 2025 16:05:59 GMT
- Title: Galois groups of polynomials and neurosymbolic networks
- Authors: Elira Shaska, Tony Shaska
- Abstract summary: This paper introduces a novel approach to understanding Galois theory, one of the foundational areas of algebra, through the lens of machine learning. By analyzing equations with machine learning techniques, we aim to streamline the process of determining solvability by radicals and explore broader applications within Galois theory.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel approach to understanding Galois theory, one of the foundational areas of algebra, through the lens of machine learning. By analyzing polynomial equations with machine learning techniques, we aim to streamline the process of determining solvability by radicals and explore broader applications within Galois theory. This summary encapsulates the background, methodology, potential applications, and challenges of using data science in Galois theory. More specifically, we design a neurosymbolic network to classify Galois groups and show how this is more efficient than standard neural networks. We discover some very interesting distributions of polynomials for groups not isomorphic to the symmetric and alternating groups.
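The abstract's pipeline pairs symbolic information about a polynomial with a learned classifier. A minimal sketch of the symbolic side (not the authors' network, and the function name `cubic_galois_hint` is hypothetical) uses the classical fact that an irreducible cubic over $\mathbb{Q}$ has Galois group $A_3$ exactly when its discriminant is a perfect square, and $S_3$ otherwise:

```python
from math import isqrt
from sympy import symbols, discriminant, Poly

x = symbols("x")

def cubic_galois_hint(poly_expr):
    """For an irreducible cubic over Q, a square discriminant implies
    Galois group A3; otherwise S3. Irreducibility is assumed, not checked."""
    d = int(discriminant(Poly(poly_expr, x)))
    if d > 0 and isqrt(d) ** 2 == d:
        return "A3"
    return "S3"

# x^3 - 3x - 1 is irreducible over Q with discriminant 81 = 9^2, so A3;
# x^3 - 2 has discriminant -108, which is not a square, so S3.
print(cubic_galois_hint(x**3 - 3*x - 1))  # A3
print(cubic_galois_hint(x**3 - 2))        # S3
```

Invariants like this discriminant test are the kind of symbolic feature a neurosymbolic classifier can exploit instead of learning group structure purely from coefficients.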
Related papers
- From Polynomials to Databases: Arithmetic Structures in Galois Theory [0.0]
We develop a framework for classifying Galois groups of irreducible degree-7 polynomials over $\mathbb{Q}$, combining explicit resolvent methods with machine learning techniques. A database of over one million normalized projective septics is constructed, each annotated with invariants $J_0, \dots, J_4$ derived from binary transvections.
arXiv Detail & Related papers (2025-11-20T18:29:38Z) - Machines Learn Number Fields, But How? The Case of Galois Groups [0.8287206589886881]
We study how simple models can classify the Galois groups of Galois extensions over $\mathbb{Q}$ of degrees 4, 6, 8, 9, and 10. Our interpretation of the machine learning results allows us to understand how the distribution of zeta coefficients depends on the Galois group.
arXiv Detail & Related papers (2025-08-08T19:32:11Z) - Neuro-Symbolic Learning for Galois Groups: Unveiling Probabilistic Trends in Polynomials [0.0]
This paper presents a neurosymbolic approach to classifying Galois groups of irreducible polynomials.
By combining neural networks with symbolic reasoning, we develop a model that outperforms purely numerical methods in accuracy and interpretability.
This work paves the way for future research in computational algebra, with implications for conjectures and higher degree classifications.
arXiv Detail & Related papers (2025-02-28T08:42:57Z) - Sheaf theory: from deep geometry to deep learning [0.3749861135832073]
This paper provides an overview of the applications of sheaf theory in deep learning, data science, and computer science.
We describe intuitions and motivations underlying sheaf theory shared by both theoretical researchers and practitioners.
We present a new algorithm to compute sheaf cohomology on arbitrary finite posets.
arXiv Detail & Related papers (2025-02-21T14:00:25Z) - An Invitation to Neuroalgebraic Geometry [6.369393363312528]
We promote the study of function spaces parameterized by machine learning models through the lens of algebraic geometry.
We outline a dictionary between algebro-geometric invariants of varieties, such as dimension, degree, and singularities, and properties of machine learning models.
This work lays the foundations of a research direction bridging algebraic geometry and deep learning.
arXiv Detail & Related papers (2025-01-31T06:33:58Z) - Towards a Categorical Foundation of Deep Learning: A Survey [0.0]
This thesis is a survey that covers some recent work attempting to study machine learning categorically.
Acting as a lingua franca of mathematics and science, category theory might be able to give a unifying structure to the field of machine learning.
arXiv Detail & Related papers (2024-10-07T13:11:16Z) - Applying language models to algebraic topology: generating simplicial
cycles using multi-labeling in Wu's formula [0.0]
We take a step towards the goal of comprehending the group-theoretic structure of the generators of homotopy groups by leveraging the power of machine learning.
We present and evaluate language modelling approaches that employ multi-label information for input sequences, along with the necessary group-theoretic toolkit and non-neural baselines.
arXiv Detail & Related papers (2023-06-01T12:23:14Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras
from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - Multiparameter Persistent Homology-Generic Structures and Quantum
Computing [0.0]
This article is an application of commutative algebra to the study of persistent homology in topological data analysis.
The generic structure of such resolutions and the classifying spaces are studied using results spanning several decades of research.
arXiv Detail & Related papers (2022-10-20T17:30:20Z) - A tradeoff between universality of equivariant models and learnability
of symmetries [0.0]
We prove that, under certain conditions, it is impossible to simultaneously learn symmetries and functions that are equivariant to them.
We analyze certain families of neural networks for whether they satisfy the conditions of the impossibility result.
On the practical side, our analysis of group-convolutional neural networks allows us to generalize the well-known "convolution is all you need" result to non-homogeneous spaces.
arXiv Detail & Related papers (2022-10-17T21:23:22Z) - Learning Algebraic Representation for Systematic Generalization in
Abstract Reasoning [109.21780441933164]
We propose a hybrid approach to improve systematic generalization in reasoning.
We showcase a prototype with algebraic representation for the abstract spatial-temporal task of Raven's Progressive Matrices (RPM).
We show that the algebraic representation learned can be decoded by isomorphism to generate an answer.
arXiv Detail & Related papers (2021-11-25T09:56:30Z) - A Practical Method for Constructing Equivariant Multilayer Perceptrons
for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
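The core computation behind such constructions can be illustrated in a few lines: an equivariant linear layer is a matrix $W$ commuting with the group action, and the full space of such layers is the null space of a Kronecker-product constraint. A minimal sketch (a generic illustration of the idea, not the paper's algorithm) for the rotation group $C_4$ acting on $\mathbb{R}^2$:

```python
import numpy as np

# Generator of the cyclic group C4 acting on R^2: a 90-degree rotation.
rho = np.array([[0.0, -1.0],
                [1.0,  0.0]])

# Equivariance means rho @ W = W @ rho for the generator, i.e.
# (I (x) rho - rho^T (x) I) vec(W) = 0 under column-major vectorization.
n = rho.shape[0]
C = np.kron(np.eye(n), rho) - np.kron(rho.T, np.eye(n))

# The equivariant layers form the null space of C, found here via SVD:
# right singular vectors whose singular values are (numerically) zero.
_, s, vt = np.linalg.svd(C)
rank = int(np.sum(s > 1e-10))
basis = [vt[i].reshape(n, n, order="F") for i in range(rank, n * n)]

for W in basis:
    assert np.allclose(rho @ W, W @ rho)  # each basis element commutes with rho
print(len(basis))  # dimension of the space of C4-equivariant 2x2 maps
```

Here the null space is two-dimensional, recovering the classical fact that the maps commuting with a 90-degree rotation are exactly the matrices of the form $[[a, -b], [b, a]]$.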
arXiv Detail & Related papers (2021-04-19T17:21:54Z) - LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z) - Algebraic Neural Networks: Stability to Deformations [179.55535781816343]
We study algebraic neural networks (AlgNNs) with commutative algebras.
AlgNNs unify diverse architectures such as Euclidean convolutional neural networks, graph neural networks, and group neural networks.
arXiv Detail & Related papers (2020-09-03T03:41:38Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
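A standard building block for such geometric optimization (a generic illustration, not this paper's specific flow-based method) is the Cayley retraction: projecting a Euclidean gradient to a skew-symmetric direction and applying a Cayley transform keeps every iterate exactly on $O(d)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Start from a random orthogonal matrix (QR of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

def cayley_step(Q, G, lr=0.1):
    """One update on O(d): form a skew-symmetric direction A from the
    Euclidean gradient G, then apply the Cayley retraction
    Q <- (I + lr/2 A)^{-1} (I - lr/2 A) Q, which preserves orthogonality."""
    A = G @ Q.T - Q @ G.T              # skew-symmetric by construction
    I = np.eye(Q.shape[0])
    return np.linalg.solve(I + lr / 2 * A, (I - lr / 2 * A) @ Q)

for _ in range(10):
    G = rng.standard_normal((d, d))    # stand-in for a loss gradient
    Q = cayley_step(Q, G)

# The iterate stays on the orthogonal group to numerical precision.
print(np.allclose(Q.T @ Q, np.eye(d)))  # True
```

Because $(I + tA)$ is always invertible for skew-symmetric $A$, the update is well defined at every step, with no re-orthogonalization needed.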
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.