K\"ahler Geometry of Quiver Varieties and Machine Learning
- URL: http://arxiv.org/abs/2101.11487v2
- Date: Wed, 10 Feb 2021 16:09:49 GMT
- Title: K\"ahler Geometry of Quiver Varieties and Machine Learning
- Authors: George Jeffreys and Siu-Cheong Lau
- Abstract summary: We develop an algebro-geometric formulation for neural networks in machine learning using the moduli space of framed quiver representations.
We prove the universal approximation theorem for the multi-variable activation function constructed from the complex projective space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop an algebro-geometric formulation for neural networks in machine
learning using the moduli space of framed quiver representations. We find
natural Hermitian metrics on the universal bundles over the moduli which are
compatible with the GIT quotient construction by the general linear group, and
show that their Ricci curvatures give a Kähler metric on the moduli.
Moreover, we use toric moment maps to construct activation functions, and prove
the universal approximation theorem for the multi-variable activation function
constructed from the complex projective space.
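As a concrete illustration (a sketch assuming the standard Fubini-Study toric moment map on complex projective space, up to normalization conventions that may differ from the paper's), the torus $(S^1)^n$ acts on $\mathbb{CP}^n$, and the associated moment map sends a point $[z_0 : z_1 : \cdots : z_n]$ to
$$ \mu([z_0 : \cdots : z_n]) = \frac{\left( |z_1|^2, \ldots, |z_n|^2 \right)}{|z_0|^2 + |z_1|^2 + \cdots + |z_n|^2} \in \mathbb{R}^n. $$
Each component lies in $[0,1]$ and the components sum to at most $1$, so $\mu$ behaves like a softmax built from squared moduli; composing such a map with linear layers yields the kind of bounded, multi-variable activation function to which the universal approximation theorem above refers.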
Related papers
- On the Geometry and Optimization of Polynomial Convolutional Networks [2.9816332334719773]
We study convolutional neural networks with monomial activation functions.
We compute the dimension and the degree of the neuromanifold, which measure the expressivity of the model.
For a generic large dataset, we derive an explicit formula that quantifies the number of critical points arising in the optimization of a regression loss.
arXiv Detail & Related papers (2024-10-01T14:13:05Z) - RMLR: Extending Multinomial Logistic Regression into General Geometries [64.16104856124029]
Our framework only requires minimal geometric properties, thus exhibiting broad applicability.
We develop five families of SPD MLRs under five types of power-deformed metrics.
For rotation matrices, we propose a Lie MLR based on the popular bi-invariant metric.
arXiv Detail & Related papers (2024-09-28T18:38:21Z) - A Unified Framework for Discovering Discrete Symmetries [17.687122467264487]
We consider the problem of learning a function respecting a symmetry from among a class of symmetries.
We develop a unified framework that enables symmetry discovery across a broad range of subgroups.
arXiv Detail & Related papers (2023-09-06T10:41:30Z) - A General Framework for Equivariant Neural Networks on Reductive Lie Groups [2.0769531810371307]
Reductive Lie Groups play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging.
We present a general Equivariant Neural Network architecture capable of respecting the finite-dimensional representations of any reductive Lie Group G.
arXiv Detail & Related papers (2023-05-31T18:09:37Z) - Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - Local and global topological complexity measures of ReLU neural network functions [0.0]
We apply a piecewise-linear (PL) version of Morse theory due to Grunert-Kühnel-Rote to define and study new local and global notions of topological complexity.
We show how to construct, for each such F, a canonical polytopal complex K(F) and a deformation retract of the domain onto K(F), yielding a convenient compact model for performing calculations.
arXiv Detail & Related papers (2022-04-12T19:49:13Z) - Noncommutative Geometry of Computational Models and Uniformization for Framed Quiver Varieties [0.0]
We formulate a mathematical setup for computational neural networks using noncommutative algebras and near-rings.
We study the moduli space of the corresponding framed quiver representations, and find moduli of Euclidean and non-compact types in light of uniformization.
arXiv Detail & Related papers (2022-01-15T18:08:50Z) - A Unifying and Canonical Description of Measure-Preserving Diffusions [60.59592461429012]
A complete recipe of measure-preserving diffusions in Euclidean space was recently derived, unifying several MCMC algorithms into a single framework.
We develop a geometric theory that improves and generalises this construction to any manifold.
arXiv Detail & Related papers (2021-05-06T17:36:55Z) - LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z) - A Functional Perspective on Learning Symmetric Functions with Neural Networks [48.80300074254758]
We study the learning and representation of neural networks defined on measures.
We establish approximation and generalization bounds under different choices of regularization.
The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes.
arXiv Detail & Related papers (2020-08-16T16:34:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.