Group Invariant Dictionary Learning
- URL: http://arxiv.org/abs/2007.07550v2
- Date: Sat, 5 Jun 2021 04:58:41 GMT
- Title: Group Invariant Dictionary Learning
- Authors: Yong Sheng Soh
- Abstract summary: We develop a framework for learning dictionaries for data under the constraint that the collection of basic building blocks remains invariant under such symmetries.
Our framework specializes to the convolutional dictionary learning problem when we consider integer shifts.
Our numerical experiments on synthetic data and ECG data show that the incorporation of such symmetries as priors is most valuable when the dataset has few data points.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dictionary learning problem concerns the task of representing data as
sparse linear sums drawn from a smaller collection of basic building blocks. In
application domains where such techniques are deployed, we frequently encounter
datasets where some form of symmetry or invariance is present. Motivated by
this observation, we develop a framework for learning dictionaries for data
under the constraint that the collection of basic building blocks remains
invariant under such symmetries. Our procedure for learning such dictionaries
relies on representing the symmetry as the action of a matrix group acting on
the data, and subsequently introducing a convex penalty function so as to
induce sparsity with respect to the collection of matrix group elements. Our
framework specializes to the convolutional dictionary learning problem when we
consider integer shifts. Using properties of positive semidefinite Hermitian
Toeplitz matrices, we develop an extension that learns dictionaries that are
invariant under continuous shifts. Our numerical experiments on synthetic data
and ECG data show that the incorporation of such symmetries as priors is most
valuable when the dataset has few data points, or when the full range of
symmetries is inadequately expressed in the dataset.
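To make the abstract's formulation concrete, here is a minimal numpy sketch (not the authors' implementation) of the shift-invariant special case: the effective dictionary is the orbit of a few base atoms under all circular integer shifts, sparse codes are obtained with an L1 penalty (plain ISTA), and the base atoms are updated by a simple least-squares-plus-refolding heuristic rather than the paper's convex-penalty procedure. The function names (`shift_orbit`, `ista`, `learn_shift_invariant_dict`) are illustrative.

```python
import numpy as np

def shift_orbit(atoms):
    """Columns are every circular integer shift of each base atom (the group orbit),
    grouped atom-by-atom: columns a*d .. a*d + d-1 are the shifts of atom a."""
    d, k = atoms.shape
    cols = [np.roll(atoms[:, a], s) for a in range(k) for s in range(d)]
    return np.stack(cols, axis=1)                        # shape (d, k*d)

def ista(D, x, lam=0.1, n_iter=200):
    """L1-penalised sparse coding of x in dictionary D (plain ISTA)."""
    L = np.linalg.norm(D, 2) ** 2                        # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - x) / L                    # gradient step on 0.5*||Dz - x||^2
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding
    return z

def learn_shift_invariant_dict(X, k=3, lam=0.1, n_outer=20, seed=0):
    """Alternate sparse coding over the shift orbit with a heuristic base-atom update."""
    d, n = X.shape
    rng = np.random.default_rng(seed)
    atoms = rng.standard_normal((d, k))
    atoms /= np.linalg.norm(atoms, axis=0)
    for _ in range(n_outer):
        D = shift_orbit(atoms)                           # the shift-invariant dictionary
        Z = np.stack([ista(D, X[:, j], lam) for j in range(n)], axis=1)
        # Unconstrained least-squares update of the orbit dictionary ...
        D_new = np.linalg.lstsq(Z.T, X.T, rcond=None)[0].T
        # ... then refold: average the un-shifted copies back onto each base atom.
        for a in range(k):
            block = D_new[:, a * d:(a + 1) * d]
            atoms[:, a] = np.mean([np.roll(block[:, s], -s) for s in range(d)], axis=0)
        atoms /= np.linalg.norm(atoms, axis=0) + 1e-12
    return atoms

# Example: data built from shifted copies of two hidden atoms.
rng = np.random.default_rng(1)
true_atoms = rng.standard_normal((32, 2))
X = np.stack([np.roll(true_atoms[:, rng.integers(2)], rng.integers(32)) for _ in range(50)], axis=1)
learned = learn_shift_invariant_dict(X, k=2)
print(learned.shape)          # (32, 2) base atoms; the full dictionary is their shift orbit
```

Here the shift group enters only through `shift_orbit`; swapping in another finite matrix group would change that one function, which is the spirit of the general framework described above.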
Related papers
- Invariant Kernels: Rank Stabilization and Generalization Across Dimensions [7.154556482351604]
We show that symmetry has a pronounced impact on the rank of kernel matrices.
Specifically, we compute the rank of a kernel of fixed degree that is invariant under various groups acting independently on its two arguments.
In concrete circumstances, including the three examples discussed in the paper, symmetry dramatically decreases the rank, making it independent of the data dimension.
arXiv Detail & Related papers (2025-02-03T23:37:43Z)
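As a rough illustration of the rank effect described above (not code from that paper), the following numpy snippet averages a degree-2 polynomial kernel over the cyclic shift group acting independently on both arguments and compares the numerical rank of the resulting Gram matrix with the unsymmetrized one; for this particular group the rank drops sharply, though it is not dimension-independent as in the paper's examples.

```python
import numpy as np

def poly_kernel(X, Y, degree=2):
    """Plain homogeneous polynomial kernel k(x, y) = (x.y)^degree."""
    return (X @ Y.T) ** degree

def cyclic_symmetrized_kernel(X, Y, degree=2):
    """Average the kernel over all circular shifts applied independently to each argument."""
    d = X.shape[1]
    K = np.zeros((X.shape[0], Y.shape[0]))
    for s in range(d):
        for t in range(d):
            K += poly_kernel(np.roll(X, s, axis=1), np.roll(Y, t, axis=1), degree)
    return K / d**2

rng = np.random.default_rng(0)
d, n = 8, 200                       # data dimension and number of samples
X = rng.standard_normal((n, d))

K_plain = poly_kernel(X, X)
K_sym = cyclic_symmetrized_kernel(X, X)

rank = lambda K: np.linalg.matrix_rank(K, tol=1e-8 * np.linalg.norm(K, 2))
print("rank without symmetry:", rank(K_plain))   # about d(d+1)/2 = 36
print("rank with symmetry:   ", rank(K_sym))     # noticeably smaller
```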
- Learning Symmetries via Weight-Sharing with Doubly Stochastic Tensors [46.59269589647962]
Group equivariance has emerged as a valuable inductive bias in deep learning.
Group equivariant methods require the groups of interest to be known beforehand.
We show that when the dataset exhibits strong symmetries, the permutation matrices will converge to regular group representations.
arXiv Detail & Related papers (2024-12-05T20:15:34Z)
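A common ingredient in such learnable weight-sharing schemes is a differentiable projection onto doubly stochastic matrices; the sketch below is a generic illustration of that ingredient (an assumption about the approach, not the paper's architecture), using Sinkhorn row/column normalisation to turn unconstrained logits into soft permutations that transport a shared base filter.

```python
import numpy as np

def sinkhorn(logits, n_iter=50):
    """Map a square matrix of logits to an (approximately) doubly stochastic matrix
    by alternating row and column normalisation of its exponential."""
    P = np.exp(logits - logits.max())         # positive matrix, numerically stable
    for _ in range(n_iter):
        P /= P.sum(axis=1, keepdims=True)     # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)     # columns sum to 1
    return P

# A learnable "soft permutation" stack: one matrix per hypothesised group element.
rng = np.random.default_rng(0)
n, n_group = 6, 4
logits = rng.standard_normal((n_group, n, n))
soft_perms = np.stack([sinkhorn(L) for L in logits])

# Weight sharing: one base filter is transported by every soft permutation, so the
# layer becomes (approximately) equivariant to whatever group the matrices converge to.
base_filter = rng.standard_normal(n)
shared_filters = soft_perms @ base_filter      # shape (n_group, n)
print(shared_filters.shape)
```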
- Learning Infinitesimal Generators of Continuous Symmetries from Data [15.42275880523356]
We propose a novel symmetry learning algorithm based on transformations defined by one-parameter groups.
Our method is built upon minimal inductive biases, encompassing not only the commonly used symmetries rooted in Lie groups but also symmetries derived from nonlinear generators.
arXiv Detail & Related papers (2024-10-29T08:28:23Z)
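The following scipy/numpy sketch illustrates the one-parameter-group idea on which such methods are built (a generic example, not the paper's algorithm): a candidate generator A defines the group g(t) = expm(t*A), and A generates a symmetry of a function f precisely when f(g(t)x) = f(x) for all t.

```python
import numpy as np
from scipy.linalg import expm

# Infinitesimal generator of 2-D rotations; the one-parameter group is g(t) = expm(t * A).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def f(x):
    """A rotation-invariant test function (depends only on the radius)."""
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 2))

# Invariance check: f(g(t) x) equals f(x) for every t because A generates a symmetry of f.
for t in (0.1, 0.5, 2.0):
    gx = x @ expm(t * A).T
    print(t, np.max(np.abs(f(gx) - f(x))))    # numerically zero

# A symmetry-learning method in this spirit would instead parameterise A and minimise
# a loss such as mean((f(g(t) x) - f(x))**2) over the data and a range of small t.
```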
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries via trained neural networks which approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z)
- Dictionary Learning under Symmetries via Group Representations [1.304892050913381]
We study the problem of learning a dictionary that is invariant under a pre-specified group of transformations.
We apply our paradigm to investigate the dictionary learning problem for the groups SO(2) and SO(3).
arXiv Detail & Related papers (2023-05-31T04:54:06Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
- Learning Log-Determinant Divergences for Positive Definite Matrices [47.61701711840848]
In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the alpha-beta log-det divergence, a meta-divergence parametrized by the scalars alpha and beta.
Our key idea is to cast these parameters in a continuum and learn them from data.
arXiv Detail & Related papers (2021-04-13T19:09:43Z)
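For concreteness, here is a small numpy sketch of such a parametric divergence, assuming the alpha-beta log-det divergence in the form given by Cichocki et al. (2015); the exact parametrization used in the paper may differ, and the helper name `ab_logdet_divergence` is illustrative.

```python
import numpy as np

def ab_logdet_divergence(P, Q, alpha, beta):
    """Alpha-beta log-det divergence between SPD matrices P and Q (form of
    Cichocki et al. 2015, assuming alpha, beta and alpha+beta are nonzero),
    computed through the eigenvalues of Q^{-1/2} P Q^{-1/2}."""
    w, V = np.linalg.eigh(Q)
    Q_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    lam = np.linalg.eigvalsh(Q_inv_sqrt @ P @ Q_inv_sqrt)
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(terms)) / (alpha * beta)

# Quick sanity checks on random SPD matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
P = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5))
Q = B @ B.T + 5 * np.eye(5)

print(ab_logdet_divergence(P, P, 0.5, 0.5))   # ~0 when the arguments coincide
print(ab_logdet_divergence(P, Q, 0.5, 0.5))   # > 0 otherwise
# In the paper's data-driven setting, alpha and beta would be fit to the task
# rather than fixed by hand as they are here.
```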
This list is automatically generated from the titles and abstracts of the papers on this site.