Learning Explicitly Conditioned Sparsifying Transforms
- URL: http://arxiv.org/abs/2403.03168v1
- Date: Tue, 5 Mar 2024 18:03:51 GMT
- Title: Learning Explicitly Conditioned Sparsifying Transforms
- Authors: Andrei Pătrașcu, Cristian Rusu, Paul Irofti
- Abstract summary: We consider a new sparsifying transform model that enforces explicit control over the data representation quality and the condition number of the learned transforms.
We confirm through numerical experiments that our model exhibits better numerical behavior than the state-of-the-art.
- Score: 7.335712499936904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparsifying transforms have become, over the last decades, widely
known tools for finding structured sparse representations of signals in
certain transform domains. Despite the popularity of classical transforms
such as the DCT and wavelets, learning optimal transforms that guarantee good
representations of data in the sparse domain has recently been analyzed in a
series of papers. Typically, the condition number and the representation
ability are complementary key features of learned square transforms that may
not be explicitly controlled in a given optimization model. Unlike existing
approaches from the literature, in our paper we consider a new sparsifying
transform model that enforces explicit control over the data representation
quality and the condition number of the learned transforms. We confirm
through numerical experiments that our model exhibits better numerical
behavior than the state-of-the-art.
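
To make the idea concrete, here is a minimal Python sketch of square sparsifying transform learning with an explicit conditioning constraint: sparse codes come from per-column hard thresholding, the transform is updated in closed form, and the condition number is controlled by clipping singular values. This illustrates the general technique only; it is not the authors' exact optimization model.

```python
import numpy as np

def hard_threshold(Z, s):
    """Keep the s largest-magnitude entries in each column, zero the rest."""
    out = np.zeros_like(Z)
    idx = np.argsort(-np.abs(Z), axis=0)[:s]
    cols = np.arange(Z.shape[1])
    out[idx, cols] = Z[idx, cols]
    return out

def project_condition(W, kappa_max):
    """Clip singular values from below so that cond(W) <= kappa_max."""
    U, sv, Vt = np.linalg.svd(W, full_matrices=False)
    sv = np.clip(sv, sv.max() / kappa_max, None)
    return U @ np.diag(sv) @ Vt

def learn_transform(X, s, kappa_max, iters=50, lam=1e-3):
    n = X.shape[0]
    W = np.eye(n)                   # start from the identity transform
    G = X @ X.T + lam * np.eye(n)   # regularized Gram matrix for the update
    for _ in range(iters):
        Z = hard_threshold(W @ X, s)            # sparse coding step
        W = np.linalg.solve(G.T, X @ Z.T).T     # closed-form transform update
        W = project_condition(W, kappa_max)     # explicit conditioning control
    return W
```

Clipping the spectrum at `sv.max() / kappa_max` guarantees cond(W) <= kappa_max after every iteration; the paper's contribution is to make this kind of control explicit inside the optimization model itself.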
Related papers
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms, such as low-rank computation, perform well for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on the testing performance.
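
For context, "low-rank computation" refers to replacing a dense weight matrix by a rank-r factorization; a minimal numpy sketch of the idea (illustrative only, not the paper's analysis):

```python
import numpy as np

def low_rank_approx(W, r):
    """Best rank-r approximation of W in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# A rank-r factorization stores O((m + n) * r) numbers instead of O(m * n),
# so matrix-vector products during inference cost O((m + n) * r) as well.
W = np.random.randn(256, 256)
W_r = low_rank_approx(W, r=16)
```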
arXiv Detail & Related papers (2024-06-24T23:00:58Z)
- Domain Generalization In Robust Invariant Representation [10.132611239890345]
In this paper, we investigate the generalization of invariant representations on out-of-distribution data.
We show that the invariant model learns unstructured latent representations that are robust to distribution shifts.
arXiv Detail & Related papers (2023-04-07T00:58:30Z)
- Entropy optimized semi-supervised decomposed vector-quantized variational autoencoder model based on transfer learning for multiclass text classification and generation [3.9318191265352196]
We propose a semi-supervised discrete latent variable model for multi-class text classification and text generation.
The proposed model employs the concept of transfer learning for training a quantized transformer model.
Experimental results indicate that the proposed model remarkably surpasses state-of-the-art models.
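
As background, the discrete latent bottleneck of a vector-quantized model maps each encoder output to its nearest codebook entry. A minimal sketch follows; the codebook size and dimensions are arbitrary, and training additionally needs a straight-through gradient estimator, which is omitted here:

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each row of z to its nearest codebook entry (Euclidean distance)."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) distances
    idx = d.argmin(axis=1)        # discrete latent codes
    return codebook[idx], idx

codebook = np.random.randn(64, 16)   # K=64 codes of dimension 16
z = np.random.randn(8, 16)           # a batch of encoder outputs
zq, codes = vector_quantize(z, codebook)
```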
arXiv Detail & Related papers (2021-11-10T07:07:54Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Visformer: The Vision-friendly Transformer [105.52122194322592]
We propose a new architecture named Visformer, abbreviated from "Vision-friendly Transformer".
With the same computational complexity, Visformer outperforms both the Transformer-based and convolution-based models in terms of ImageNet classification accuracy.
arXiv Detail & Related papers (2021-04-26T13:13:03Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
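
One common way to realize such a mechanism is to combine an invariance term that matches embeddings across a transformation with an equivariance proxy that predicts which transformation was applied. The sketch below is hedged: the heads and dimensions are hypothetical illustrations, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical components: a shared encoder, an invariant head, and an
# equivariant head over K candidate transformations.
K, d = 4, 32
encoder = nn.Linear(128, d)
inv_head = nn.Linear(d, d)
equi_head = nn.Linear(d, K)

def inv_equi_loss(x, x_t, t_label):
    h, h_t = encoder(x), encoder(x_t)
    l_inv = F.mse_loss(inv_head(h), inv_head(h_t))     # invariance: embeddings agree
    l_equi = F.cross_entropy(equi_head(h_t), t_label)  # equivariance proxy: recover t
    return l_inv + l_equi

x = torch.randn(8, 128)    # original samples
x_t = torch.randn(8, 128)  # the same samples after a transformation
loss = inv_equi_loss(x, x_t, torch.randint(0, K, (8,)))
```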
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Partitioning signal classes using transport transforms for data analysis and machine learning [8.111947517189641]
A new set of transport-based transforms (CDT, R-CDT, LOT) has shown strength and great potential in various image and data processing tasks.
This paper will serve as an introduction to these transforms and will encourage mathematicians and other researchers to further explore the theoretical underpinnings and algorithmic tools.
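
For readers unfamiliar with these transforms, here is a minimal 1-D Cumulative Distribution Transform (CDT) sketch, using the convention that the transform is f - id where S(f(x)) = R(x) for the CDFs of the signal and a reference; normalization and sign conventions vary across the literature:

```python
import numpy as np

def cdt(s, r, x):
    """CDT of density s w.r.t. reference density r on grid x.
    Returns f - x, where f = S^{-1}(R(x)) for the CDFs S, R."""
    S = np.cumsum(s); S /= S[-1]
    R = np.cumsum(r); R /= R[-1]
    f = np.interp(R, S, x)   # f = S^{-1}(R(x)) via linear interpolation
    return f - x

x = np.linspace(-5, 5, 501)
s = np.exp(-0.5 * (x - 1.0) ** 2); s /= s.sum()   # shifted Gaussian density
r = np.exp(-0.5 * x ** 2);         r /= r.sum()   # reference density
shat = cdt(s, r, x)   # for a pure translation the CDT is ~ constant
```

Translations of the signal become constant shifts in the transform domain, which is what makes transport transforms useful for linearly separating signal classes.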
arXiv Detail & Related papers (2020-08-08T06:12:10Z)
- FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning [64.32306537419498]
We propose a novel learned feature-based refinement and augmentation method that produces a varied set of complex transformations.
These transformations also use information from both within-class and across-class representations that we extract through clustering.
We demonstrate that our method is comparable to the current state of the art for smaller datasets while being able to scale up to larger datasets.
arXiv Detail & Related papers (2020-07-16T17:55:31Z)
- Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT Reconstruction [11.470070927586017]
Sparsifying transform learning involves highly efficient sparse coding and operator update steps.
We propose a Multi-layer Residual Sparsifying Transform (MRST) learning model wherein the transform domain residuals are jointly sparsified over layers.
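
A minimal sketch of the layered residual idea follows (illustrative only: MRST jointly learns the transforms W_l, and the thresholding rule here is a simple per-column top-s selection):

```python
import numpy as np

def top_s(T, s):
    """Keep (at least) the s largest-magnitude entries per column."""
    thresh = np.partition(np.abs(T), -s, axis=0)[-s]
    return np.where(np.abs(T) >= thresh, T, 0.0)

def mrst_encode(X, Ws, s):
    """Sparsify each layer's transform-domain residual with the next transform:
    Z_l = top_s(W_l R_{l-1}), R_l = W_l R_{l-1} - Z_l, with R_0 = X."""
    R, codes = X, []
    for W in Ws:
        T = W @ R
        Z = top_s(T, s)
        codes.append(Z)
        R = T - Z          # residual passed on to the next layer
    return codes, R
```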
arXiv Detail & Related papers (2020-05-08T02:36:50Z)
- On Compositions of Transformations in Contrastive Self-Supervised Learning [66.15514035861048]
In this paper, we generalize contrastive learning to a wider set of transformations.
We find that being invariant to certain transformations and distinctive to others is critical to learning effective video representations.
arXiv Detail & Related papers (2020-03-09T17:56:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.