Flexible Moment-Invariant Bases from Irreducible Tensors
- URL: http://arxiv.org/abs/2503.21939v2
- Date: Thu, 03 Apr 2025 16:25:35 GMT
- Title: Flexible Moment-Invariant Bases from Irreducible Tensors
- Authors: Roxana Bujack, Emily Shinkle, Alice Allen, Tomas Suk, Nicholas Lubbers,
- Abstract summary: A set of invariants is optimal if it is complete, independent, and robust against degeneracy in the input. We show how to overcome the vulnerability of current bases to degenerate (spherical) inputs by combining two popular moment invariant approaches.
- Score: 0.8248781893273871
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moment invariants are a powerful tool for the generation of rotation-invariant descriptors needed for many applications in pattern detection, classification, and machine learning. A set of invariants is optimal if it is complete, independent, and robust against degeneracy in the input. In this paper, we show that the current state of the art for the generation of these bases of moment invariants, despite being robust against moment tensors being identically zero, is vulnerable to a degeneracy that is common in real-world applications, namely spherical functions. We show how to overcome this vulnerability by combining two popular moment invariant approaches: one based on spherical harmonics and one based on Cartesian tensor algebra.
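As a concrete (though much simpler) illustration of the idea, the Python sketch below builds the classic second-order Cartesian moment tensor of a point cloud, takes the elementary symmetric polynomials of its eigenvalues as rotation invariants, verifies that they are unchanged under a random rotation, and shows how a spherically symmetric input degenerates to an isotropic moment tensor, which is the kind of degeneracy the abstract warns about. The point-cloud setup and function names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def second_order_moments(points):
    """Central second-order moment tensor M_ij of a point cloud."""
    centered = points - points.mean(axis=0)
    return centered.T @ centered / len(points)

def rotation_invariants(M):
    """Rotation invariants of a symmetric 3x3 tensor: the elementary
    symmetric polynomials of its eigenvalues (trace, sum of principal
    2x2 minors, determinant)."""
    eig = np.linalg.eigvalsh(M)
    return np.array([eig.sum(),
                     eig[0] * eig[1] + eig[0] * eig[2] + eig[1] * eig[2],
                     np.prod(eig)])

def random_rotation(rng):
    """Random rotation matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))      # make the factorization unique
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1             # ensure det = +1 (rotation, not reflection)
    return q

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.5])   # anisotropic cloud
R = random_rotation(rng)

i_original = rotation_invariants(second_order_moments(cloud))
i_rotated = rotation_invariants(second_order_moments(cloud @ R.T))
print(np.allclose(i_original, i_rotated))   # True: descriptors unchanged by rotation

# Degenerate (spherical) input: the moment tensor is nearly isotropic,
# so the eigenvalue-based invariants carry almost no shape information.
sphere = rng.normal(size=(5000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
print(np.round(np.linalg.eigvalsh(second_order_moments(sphere)), 3))  # roughly [1/3, 1/3, 1/3]
```

The paper itself is concerned with complete, independent bases at higher tensor orders; this sketch only demonstrates the low-order mechanics and the spherical degeneracy that motivates the work.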
Related papers
- Irregular Tensor Low-Rank Representation for Hyperspectral Image Representation [71.69331824668954]
Spectral variations pose a common challenge in analyzing hyperspectral images (HSI). Low-rank tensor representation has emerged as a robust strategy, leveraging inherent correlations within HSI data. We propose a novel model for irregular tensor low-rank representation tailored to efficiently model irregular 3D cubes.
arXiv Detail & Related papers (2024-10-24T02:56:22Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks [28.87871359825978]
We reveal a strong implicit bias of stochastic gradient descent (SGD) that drives overly expressive networks to much simpler subnetworks.
We focus on two classes of invariant sets that correspond to simpler (sparse or low-rank) subnetworks and commonly appear in modern architectures.
We observe empirically the existence of attractive invariant sets in trained deep neural networks, implying that SGD dynamics often collapse to simpler subnetworks with either vanishing or redundant neurons.
arXiv Detail & Related papers (2023-06-07T08:44:51Z)
- Unifying O(3) Equivariant Neural Networks Design with Tensor-Network Formalism [12.008737454250463]
We propose using fusion diagrams, a technique widely employed in simulating SU($2$)-symmetric quantum many-body problems, to design new equivariant components for equivariant neural networks.
When applied to particles within a given local neighborhood, the resulting components, which we term "fusion blocks," serve as universal approximators of any continuous equivariant function.
Our approach, which combines tensor networks with equivariant neural networks, suggests a potentially fruitful direction for designing more expressive equivariant neural networks.
arXiv Detail & Related papers (2022-11-14T16:06:59Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Sufficient Invariant Learning for Distribution Shift [20.88069274935592]
We introduce a novel learning principle called the Sufficient Invariant Learning (SIL) framework.
SIL focuses on learning a sufficient subset of invariant features rather than relying on a single feature.
We propose a new algorithm, Adaptive Sharpness-aware Group Distributionally Robust Optimization (ASGDRO), to learn diverse invariant features by seeking common flat minima.
arXiv Detail & Related papers (2022-10-24T18:34:24Z)
- Group-invariant tensor train networks for supervised learning [0.0]
We introduce a new numerical algorithm to construct a basis of tensors that are invariant under the action of normal matrix representations.
The group-invariant tensors are then combined into a group-invariant tensor train network, which can be used as a supervised machine learning model.
arXiv Detail & Related papers (2022-06-30T06:33:08Z)
- Unified Fourier-based Kernel and Nonlinearity Design for Equivariant Networks on Homogeneous Spaces [52.424621227687894]
We introduce a unified framework for group equivariant networks on homogeneous spaces.
We take advantage of the sparsity of Fourier coefficients of the lifted feature fields.
We show that other methods treating features as the Fourier coefficients in the stabilizer subgroup are special cases of our activation.
arXiv Detail & Related papers (2022-06-16T17:59:01Z)
- Low Dimensional Invariant Embeddings for Universal Geometric Learning [6.405957390409045]
This paper studies separating invariants: mappings on $D$ dimensional domains which are invariant to an appropriate group action, and which separate orbits.
The motivation for this study comes from the usefulness of separating invariants in proving universality of equivariant neural network architectures.
arXiv Detail & Related papers (2022-05-05T22:56:19Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- Convergence of a Stochastic Gradient Method with Momentum for Non-Smooth Non-Convex Optimization [25.680334940504405]
This paper establishes the convergence rate of a stochastic subgradient method with momentum for constrained non-smooth problems.
We also show how the unconstrained case can be analyzed under weaker assumptions than the state of the art.
arXiv Detail & Related papers (2020-02-13T12:10:17Z)
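For context on the momentum scheme analyzed in the entry above, here is a minimal stochastic heavy-ball sketch on a toy least-squares problem; it is an illustrative rendering of the generic update rule, not the constrained stochastic subgradient algorithm studied in that paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: least-squares fit, minimized with stochastic heavy-ball momentum.
A = rng.normal(size=(200, 10))
b = A @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

x = np.zeros(10)
v = np.zeros(10)          # momentum buffer
lr, beta = 0.01, 0.9      # step size and momentum coefficient

for step in range(2000):
    idx = rng.integers(0, len(A), size=32)               # mini-batch sample
    grad = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)   # stochastic gradient
    v = beta * v + grad                                  # heavy-ball momentum update
    x = x - lr * v                                       # parameter step

print(np.linalg.norm(A @ x - b) / np.sqrt(len(b)))       # residual RMS after training
```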