Automatic Discovery of One-Parameter Subgroups of Lie Groups: Compact and Non-Compact Cases of $\mathbf{SO(n)}$ and $\mathbf{SL(n)}$
- URL: http://arxiv.org/abs/2509.22219v3
- Date: Tue, 04 Nov 2025 19:00:00 GMT
- Title: Automatic Discovery of One-Parameter Subgroups of Lie Groups: Compact and Non-Compact Cases of $\mathbf{SO(n)}$ and $\mathbf{SL(n)}$
- Authors: Pavan Karjol, Vivek V Kashyap, Rohan Kashyap, Prathosh A P
- Abstract summary: We introduce a novel framework for the automatic discovery of one-parameter subgroups of $SO(3)$ and, more generally, $SO(n)$. Our method utilizes the standard Jordan form of skew-symmetric matrices, which define the Lie algebra of $SO(n)$. The effectiveness of the proposed approach is demonstrated through tasks such as pendulum modeling, moment of inertia prediction, top quark tagging and invariant regression.
- Score: 7.771878859878091
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel framework for the automatic discovery of one-parameter subgroups ($H_{\gamma}$) of $SO(3)$ and, more generally, $SO(n)$. One-parameter subgroups of $SO(n)$ are crucial in a wide range of applications, including robotics, quantum mechanics, and molecular structure analysis. Our method utilizes the standard Jordan form of skew-symmetric matrices, which define the Lie algebra of $SO(n)$, to establish a canonical form for orbits under the action of $H_{\gamma}$. This canonical form is then employed to derive a standardized representation for $H_{\gamma}$-invariant functions. By learning the appropriate parameters, the framework uncovers the underlying one-parameter subgroup $H_{\gamma}$. The effectiveness of the proposed approach is demonstrated through tasks such as double pendulum modeling, moment of inertia prediction, top quark tagging and invariant polynomial regression, where it successfully recovers meaningful subgroup structure and produces interpretable, symmetry-aware representations.
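To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch of the linear algebra the abstract leans on; it is not the authors' implementation, and the class and method names are ours. A skew-symmetric generator $A$ yields a one-parameter subgroup $t \mapsto \exp(tA)$ of $SO(n)$, and in the eigenbasis of $A$ (the complex counterpart of its real Jordan block form) every group element acts by unit-modulus phases, so the moduli of the eigenbasis coordinates furnish $H_{\gamma}$-invariant features.

```python
# Hypothetical sketch (not the paper's code): a learnable one-parameter subgroup of SO(n)
# and canonical H_gamma-invariant features derived from its block (Jordan) structure.
import torch

class OneParamSubgroup(torch.nn.Module):
    """H_gamma = { exp(t * A) : t in R } with a learnable skew-symmetric generator A."""

    def __init__(self, n: int):
        super().__init__()
        self.raw = torch.nn.Parameter(0.1 * torch.randn(n, n))  # unconstrained parameters

    @property
    def generator(self) -> torch.Tensor:
        # Skew-symmetrizing guarantees that exp(t * A) lies in SO(n) for every t.
        return self.raw - self.raw.T

    def element(self, t: torch.Tensor) -> torch.Tensor:
        # Batched group elements g(t) = exp(t * A), shape (len(t), n, n).
        return torch.matrix_exp(t.view(-1, 1, 1) * self.generator)

    def invariant_features(self, x: torch.Tensor) -> torch.Tensor:
        """Canonical H_gamma-invariant coordinates for a batch x of shape (batch, n).

        A real skew-symmetric A is normal, so exp(t * A) acts by unit-modulus phases in
        the eigenbasis of A; the moduli of the eigenbasis coordinates (the radii of the
        invariant 2-planes in the Jordan form) are therefore constant along every orbit.
        """
        _, V = torch.linalg.eig(self.generator)             # complex eigenvectors of A
        coords = torch.linalg.solve(V, x.to(V.dtype).T).T   # coordinates V^{-1} x
        return coords.abs()                                  # invariant under exp(t * A)
```

In a training loop one would plausibly feed `invariant_features(x)` into a downstream network for tasks like those listed in the abstract, optimizing the generator parameters jointly so that the recovered $A$ identifies the underlying subgroup $H_{\gamma}$.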
Related papers
- Fast $k$-means clustering in Riemannian manifolds via Fréchet maps: Applications to large-dimensional SPD matrices [4.958499383330019]
The key innovation is the use of the $p$-Fréchet map $F_p : \mathcal{M} \to \mathbb{R}^{\ell}$. We rigorously analyze the mathematical properties of $F_p$ in Euclidean space. Our method reduces runtime by up to two orders of magnitude compared to intrinsic manifold-based approaches.
arXiv Detail & Related papers (2025-11-12T05:28:31Z) - Parameter-free Algorithms for the Stochastically Extended Adversarial Model [59.81852138768642]
Existing approaches for the Stochastically Extended Adversarial (SEA) model require prior knowledge of problem-specific parameters, such as the diameter of the domain $D$ and the Lipschitz constant of the loss functions $G$. We develop parameter-free methods by leveraging the Optimistic Online Newton Step (OONS) algorithm to eliminate the need for these parameters.
arXiv Detail & Related papers (2025-10-06T10:53:37Z) - The Generative Leap: Sharp Sample Complexity for Efficiently Learning Gaussian Multi-Index Models [71.5283441529015]
In this work we consider generic Gaussian Multi-index models, in which the labels only depend on the (Gaussian) $d$-dimensional inputs through their projection onto a low-dimensional $r = O_d(1)$ subspace. We introduce the generative leap exponent $k^{\star}$, a natural extension of the generative exponent from [Damian et al.'24] to the multi-index setting.
arXiv Detail & Related papers (2025-06-05T18:34:56Z) - Symmetry-Breaking Descent for Invariant Cost Functionals [0.0]
We study the problem of reducing a task cost functional $W : H^{s}(M) \to \mathbb{R}$, not assumed continuous or differentiable. We show that symmetry-breaking deformations of the signal can reduce the cost.
arXiv Detail & Related papers (2025-05-19T15:06:31Z) - The Sample Complexity of Online Reinforcement Learning: A Multi-model Perspective [55.15192437680943]
We study the sample complexity of online reinforcement learning in the general setting of nonlinear dynamical systems with continuous state and action spaces. Our algorithm achieves a policy regret of $\mathcal{O}(N\epsilon^{2} + \ln(m(\epsilon)/\epsilon^{2}))$, where $N$ is the time horizon. In the special case where the dynamics are parametrized by a compact and real-valued set of parameters, we prove a policy regret of $\mathcal{O}(\sqrt{\cdots})$.
arXiv Detail & Related papers (2025-01-27T10:01:28Z) - Second quantization for classical nonlinear dynamics [0.0]
We propose a framework for representing the evolution of observables of measure-preserving ergodic flows through infinite-dimensional rotation systems on tori. We show that their Banach algebra spectra, $\sigma(F_{w}(\mathcal{H}_{\tau}))$, decompose into a family of tori of potentially infinite dimension. Our scheme also employs a procedure for representing observables of the original system by reproducing functions on finite-dimensional tori in $\sigma(F_{w}(\mathcal{H}_{\tau}))$ of arbitrarily large degree.
arXiv Detail & Related papers (2025-01-13T15:36:53Z) - Lie Group Decompositions for Equivariant Neural Networks [12.139222986297261]
We show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations.
We evaluate the robustness and out-of-distribution generalisation capability of our model on the benchmark affine-invariant classification task.
arXiv Detail & Related papers (2023-10-17T16:04:33Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras
from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - Brauer's Group Equivariant Neural Networks [0.0]
We provide a full characterisation of all of the possible group equivariant neural networks whose layers are some tensor power of $\mathbb{R}^{n}$.
We find a spanning set of matrices for the learnable, linear, equivariant layer functions between such tensor power spaces.
arXiv Detail & Related papers (2022-12-16T18:08:51Z) - Algebraic Aspects of Boundaries in the Kitaev Quantum Double Model [77.34726150561087]
We provide a systematic treatment of boundaries based on subgroups $K \subseteq G$ with the Kitaev quantum double $D(G)$ model in the bulk.
The boundary sites are representations of a $*$-subalgebra $\Xi \subseteq D(G)$ and we explicate its structure as a strong $*$-quasi-Hopf algebra.
As an application of our treatment, we study patches with boundaries based on $K=G$ horizontally and $K=e$ vertically and show how these could be used in a quantum computer.
arXiv Detail & Related papers (2022-08-12T15:05:07Z) - A Practical Method for Constructing Equivariant Multilayer Perceptrons
for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems; a small numerical sketch of this layer-solving idea appears just after this list.
arXiv Detail & Related papers (2021-04-19T17:21:54Z)
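The last entry above, the equivariant-MLP construction, can also be illustrated with a short, hypothetical NumPy sketch (our reading of the abstract, not an excerpt from that paper's code): a linear layer $W$ is equivariant when it intertwines the input and output representations, which for a connected matrix group reduces to the generator constraints $A_{\text{out}} W - W A_{\text{in}} = 0$; vectorizing and taking a null space yields a basis of admissible layers.

```python
# Our own illustrative sketch: solve for all equivariant linear layers of a matrix group
# by stacking the Lie algebra constraints A_out @ W - W @ A_in = 0 and taking a null space.
import numpy as np

def equivariant_basis(gens_in, gens_out, tol=1e-8):
    """Basis of (d_out x d_in) matrices W satisfying A_out @ W == W @ A_in for all generators."""
    d_in, d_out = gens_in[0].shape[0], gens_out[0].shape[0]
    rows = []
    for A_in, A_out in zip(gens_in, gens_out):
        # Column-major vec identity: vec(A_out W - W A_in) = (I (x) A_out - A_in^T (x) I) vec(W).
        rows.append(np.kron(np.eye(d_in), A_out) - np.kron(A_in.T, np.eye(d_out)))
    C = np.vstack(rows)          # stacked constraints; at least d_in*d_out rows
    _, s, Vt = np.linalg.svd(C)  # with >= d_in*d_out rows, len(s) == Vt.shape[0]
    return [v.reshape(d_out, d_in, order="F") for v in Vt[s <= tol]]

# Example: SO(2) acting by rotation on both the input and output copies of R^2.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # generator of the rotation algebra so(2)
basis = equivariant_basis([J], [J])
print(len(basis))                # 2: the commutant of J is spanned by I and J
```

This is only meant to show why a single null-space computation can recover equivariant layers once the group's generators are known; the cited paper develops the general and efficient version of this construction.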
This list is automatically generated from the titles and abstracts of the papers on this site.