Partitioning signal classes using transport transforms for data analysis
and machine learning
- URL: http://arxiv.org/abs/2008.03452v2
- Date: Wed, 24 Feb 2021 12:48:08 GMT
- Title: Partitioning signal classes using transport transforms for data analysis
and machine learning
- Authors: Akram Aldroubi, Shiying Li, Gustavo K. Rohde
- Abstract summary: A new set of transport-based transforms (CDT, R-CDT, LOT) has shown its strength and great potential in various image and data processing tasks.
This paper will serve as an introduction to these transforms and will encourage mathematicians and other researchers to further explore the theoretical underpinnings and algorithmic tools.
- Score: 8.111947517189641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A relatively new set of transport-based transforms (CDT, R-CDT, LOT)
has shown its strength and great potential in various image and data processing
tasks such as parametric signal estimation, classification, and cancer
detection, among many others. It is hence worthwhile to elucidate some of the
mathematical properties that explain the successes of these transforms when
they are used as tools in data analysis, signal processing, or data
classification. In particular, we give conditions under which classes of
signals that are created by algebraic generative models are transformed into
convex sets by the transport transforms. Such convexification of the classes
simplifies classification and other data analysis and processing problems when
viewed in the transform domain. More specifically, we study the extent and
limitations of the convexification ability of these transforms under an
algebraic generative modeling framework. We hope that this paper will serve as
an introduction to these transforms and will encourage mathematicians and
other researchers to further explore the theoretical underpinnings and
algorithmic tools that will help explain the successes of these transforms and
lay the groundwork for further successful applications.
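To make the convexification statement concrete, the following is a short worked form of the standard CDT composition property; the displayed identities follow the usual CDT definition, while the closing remark about convexity of the inverted generator set is an illustrative special case, not a theorem quoted from the paper.

```latex
% CDT of a probability density s (CDF S) with respect to a reference
% density r (CDF R):
\hat{s} \;=\; S^{-1} \circ R .
% If a signal is generated from a template s by an increasing
% reparameterization g, i.e. s_g(x) = g'(x)\, s(g(x)), then S_g = S \circ g,
% and therefore
\widehat{s_g} \;=\; S_g^{-1} \circ R \;=\; g^{-1} \circ S^{-1} \circ R
            \;=\; g^{-1} \circ \hat{s} .
% Hence the generative class \{ s_g : g \in G \} maps to
% \{ g^{-1} \circ \hat{s} : g \in G \}, which is convex whenever
% G^{-1} = \{ g^{-1} : g \in G \} is a convex set of functions; e.g.
% translations g(x) = x - \mu give \widehat{s_g} = \hat{s} + \mu,
% an affine (hence convex) family in the transform domain.
```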
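A minimal numerical sketch of the CDT follows, assuming 1-D signals sampled on a shared grid and a uniform reference density; the function names (`cdt`, `_cdf`) and the grid are illustrative choices, not the authors' implementation. The usage lines check the translation property above: all translates of a template map to shifts of a single transform.

```python
# A minimal CDT sketch, assuming nonnegative 1-D signals on a common grid
# and a uniform reference; `cdt` and `_cdf` are hypothetical helper names.
import numpy as np

def _cdf(f, x):
    """Trapezoid-rule CDF of a nonnegative signal f on grid x, normalized to 1."""
    f = np.asarray(f, dtype=float)
    inc = 0.5 * (f[1:] + f[:-1]) * np.diff(x)
    F = np.concatenate(([0.0], np.cumsum(inc)))
    return F / F[-1]

def cdt(s, x, r=None):
    """Samples of the CDT  s_hat = S^{-1} o R  on the grid x.

    S is the CDF of s; R is the CDF of the reference r (uniform if r is None).
    S^{-1} is evaluated by interpolating each level R(x) back to a location.
    """
    S = _cdf(s, x)
    R = _cdf(r, x) if r is not None else (x - x[0]) / (x[-1] - x[0])
    return np.interp(R, S, x)

# Usage: translates of a Gaussian template become shifts in the transform
# domain, so the class of all translates is an affine (hence convex) set.
x = np.linspace(-10.0, 10.0, 2001)
template = np.exp(-0.5 * x**2)
translated = np.exp(-0.5 * (x - 3.0)**2)             # translation by mu = 3
shift = cdt(translated, x) - cdt(template, x)
print(np.allclose(shift[100:-100], 3.0, atol=1e-2))  # ~constant shift = mu
```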
Related papers
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms such as low-rank computation achieve impressive performance for Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on the testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z) - Learning Explicitly Conditioned Sparsifying Transforms [7.335712499936904]
We consider a new sparsifying transform model that enforces explicit control over the data representation quality and the condition number of the learned transforms.
We confirm through numerical experiments that our model presents better numerical behavior than the state-of-the-art.
arXiv Detail & Related papers (2024-03-05T18:03:51Z) - Geometrically Aligned Transfer Encoder for Inductive Transfer in
Regression Tasks [5.038936775643437]
We propose a novel transfer technique based on differential geometry, namely the Geometrically Aligned Transfer Encoder (GATE).
We find a proper diffeomorphism between pairs of tasks to ensure that every arbitrary point maps to a locally flat coordinate in the overlapping region, allowing the transfer of knowledge from the source to the target data.
GATE outperforms conventional methods and exhibits stable behavior in both the latent space and extrapolation regions for various molecular graph datasets.
arXiv Detail & Related papers (2023-10-10T07:11:25Z) - In-Context Convergence of Transformers [63.04956160537308]
We study the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent.
For data with imbalanced features, we show that the learning dynamics exhibit a stage-wise convergence process.
arXiv Detail & Related papers (2023-10-08T17:55:33Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Learning Lie Group Symmetry Transformations with Neural Networks [17.49001206996365]
This work focuses on discovering and characterizing unknown symmetries present in the dataset, namely, Lie group symmetry transformations.
Our goal is to characterize the transformation group and the distribution of the parameter values.
Results showcase the effectiveness of the approach in both these settings.
arXiv Detail & Related papers (2023-07-04T09:23:24Z) - Data augmentation with mixtures of max-entropy transformations for
filling-level classification [88.14088768857242]
We address the problem of distribution shifts in test-time data with a principled data augmentation scheme for the task of filling-level classification.
We show that such a principled augmentation scheme, alone, can replace current approaches that use transfer learning or can be used in combination with transfer learning to improve its performance.
arXiv Detail & Related papers (2022-03-08T11:41:38Z) - Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z) - Disentangling images with Lie group transformations and sparse coding [3.3454373538792552]
We train a model that learns to disentangle spatial patterns and their continuous transformations in a completely unsupervised manner.
Training the model on a dataset consisting of controlled geometric transformations of specific MNIST digits shows that it can recover these transformations along with the digits.
arXiv Detail & Related papers (2020-12-11T19:11:32Z) - A Compact Spectral Descriptor for Shape Deformations [0.8268443804509721]
We propose a novel methodology to obtain a parameterization of a component's plastic deformation behavior under stress.
Existing parameterizations limit computational analysis to relatively simple deformations.
We propose a way to derive a compact descriptor of deformation behavior based on spectral mesh processing.
arXiv Detail & Related papers (2020-03-10T10:34:30Z) - On Compositions of Transformations in Contrastive Self-Supervised
Learning [66.15514035861048]
In this paper, we generalize contrastive learning to a wider set of transformations.
We find that being invariant to certain transformations and distinctive to others is critical to learning effective video representations.
arXiv Detail & Related papers (2020-03-09T17:56:49Z)