Rank Reduction Autoencoders
- URL: http://arxiv.org/abs/2405.13980v3
- Date: Tue, 25 Mar 2025 09:41:34 GMT
- Title: Rank Reduction Autoencoders
- Authors: Jad Mounayer, Sebastian Rodriguez, Chady Ghnatios, Charbel Farhat, Francisco Chinesta
- Abstract summary: We introduce a new class of deterministic autoencoders, Rank Reduction Autoencoders (RRAEs). In RRAEs, the bottleneck is defined by the rank of the latent matrix, thereby alleviating the dependence of the encoder/decoder architecture on the bottleneck size. We empirically demonstrate that both RRAEs and aRRAEs are stable, scalable, and reliable.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The choice of an appropriate bottleneck dimension and the application of effective regularization are both essential for Autoencoders to learn meaningful representations from unlabeled data. In this paper, we introduce a new class of deterministic autoencoders, Rank Reduction Autoencoders (RRAEs), which regularize their latent spaces by employing a truncated singular value decomposition (SVD) during training. In RRAEs, the bottleneck is defined by the rank of the latent matrix, thereby alleviating the dependence of the encoder/decoder architecture on the bottleneck size. This approach enabled us to propose an adaptive algorithm (aRRAEs) that efficiently determines the optimal bottleneck size during training. We empirically demonstrate that both RRAEs and aRRAEs are stable, scalable, and reliable, as they do not introduce any additional training hyperparameters. We evaluate our proposed architecture on a synthetic data set, as well as on MNIST, Fashion MNIST, and CelebA. Our results show that RRAEs offer several advantages over Vanilla AEs with both large and small latent spaces, and outperform other regularizing AE architectures.
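The core mechanism described in the abstract, regularizing the latent space by projecting the latent matrix onto its best rank-r approximation via truncated SVD, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function name and the toy data are illustrative assumptions.

```python
import numpy as np

def truncated_svd_bottleneck(Z, r):
    """Project a latent matrix Z (batch x d) onto its best rank-r
    approximation via truncated SVD. In an RRAE-style setup, the
    bottleneck is this rank r rather than the latent dimension d."""
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * S[:r]) @ Vt[:r, :]

# Toy latent matrix with exact rank-2 structure.
rng = np.random.default_rng(0)
Z = rng.normal(size=(32, 2)) @ rng.normal(size=(2, 16))
Z_r = truncated_svd_bottleneck(Z, r=2)
print(np.allclose(Z, Z_r, atol=1e-8))  # True: rank-2 input is preserved
```

During training, the decoder would see `Z_r` instead of `Z`, so the effective bottleneck size can be changed (as in the adaptive aRRAE variant) without altering the encoder/decoder architecture.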
Related papers
- Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation [15.377463849213033]
EFA is a novel global context modeling mechanism that focuses on capturing global non-linearity.
Our ISR method reduces the key-value resolution at the inference phase, which can mitigate the computation-performance trade-off gap.
EDAFormer shows the state-of-the-art performance with the efficient computation compared to the existing transformer-based semantic segmentation models.
arXiv Detail & Related papers (2024-07-24T13:24:25Z) - Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z) - Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z) - Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer [60.31021888394358]
Unsupervised Domain Adaptation (UDA) can effectively address domain gap issues in real-world image Super-Resolution (SR).
We propose a SOurce-free Domain Adaptation framework for image SR (SODA-SR) to address this issue, i.e., adapt a source-trained model to a target domain with only unlabeled target data.
arXiv Detail & Related papers (2023-03-31T03:14:44Z) - Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z) - Benign Autoencoders [0.0]
We formalize the problem of finding the optimal encoder-decoder pair and characterize its solution, which we name the "benign autoencoder" (BAE).
We prove that BAE projects data onto a manifold whose dimension is the optimal compressibility dimension of the generative problem.
As an illustration, we show how BAE can find optimal, low-dimensional latent representations that improve the performance of a discriminator under a distribution shift.
arXiv Detail & Related papers (2022-10-02T21:36:27Z) - Dimensionality reduction with variational encoders based on subsystem purification [0.27998963147546135]
We propose an alternative approach to variational autoencoders to reduce the dimensionality of states represented in higher dimensional Hilbert spaces.
We make use of Bars and Stripes dataset (BAS) for an 8x8 grid to create efficient encoding states and report a classification accuracy of 95%.
arXiv Detail & Related papers (2022-09-20T15:24:11Z) - Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers [7.76925617801895]
We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for regularization terms.
This is achieved while leaving complete freedom in the choice of encoder, decoder and prior architectures.
We show that the approach results in strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance.
arXiv Detail & Related papers (2022-05-19T13:16:09Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural networks, which can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion MNIST.
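The paper's exact penalty is not spelled out in this summary, but a common way to "explicitly penalize feature redundancies in the bottleneck representation" is to penalize off-diagonal entries of the feature correlation matrix. The sketch below is a hedged illustration under that assumption; the function name and toy data are hypothetical.

```python
import numpy as np

def redundancy_penalty(H):
    """Sum of squared off-diagonal entries of the correlation matrix of
    bottleneck activations H (batch x k): zero when the k features are
    pairwise uncorrelated, large when they duplicate each other."""
    Hc = H - H.mean(axis=0)
    C = (Hc.T @ Hc) / (len(H) - 1)       # sample covariance
    d = np.sqrt(np.diag(C))
    R = C / np.outer(d, d)               # correlation matrix
    off = R - np.diag(np.diag(R))
    return np.sum(off ** 2)

rng = np.random.default_rng(1)
H_indep = rng.normal(size=(1000, 4))          # nearly uncorrelated features
H_dup = np.repeat(H_indep[:, :1], 4, axis=1)  # fully redundant features
print(redundancy_penalty(H_indep) < redundancy_penalty(H_dup))  # True
```

Added to the reconstruction loss with a weight, such a term pushes each bottleneck unit to carry information the others do not.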
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent Space Distribution Matching in WAE [51.09507030387935]
Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution.
We propose to use the contrastive learning framework that has been shown to be effective for self-supervised representation learning, as a means to resolve this problem.
We show that using the contrastive learning framework to optimize the WAE loss achieves faster convergence and more stable optimization compared with existing popular algorithms for WAE.
arXiv Detail & Related papers (2021-10-19T22:55:47Z) - Highly Parallel Autoregressive Entity Linking with Discriminative Correction [51.947280241185]
We propose a very efficient approach that parallelizes autoregressive linking across all potential mentions.
Our model is >70 times faster and more accurate than the previous generative method.
arXiv Detail & Related papers (2021-09-08T17:28:26Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - Symmetric Wasserstein Autoencoders [22.196642357767338]
We introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs).
We propose to symmetrically match the joint distributions of the observed data and the latent representation induced by the encoder and the decoder.
We empirically show the superior performance of SWAEs over the state-of-the-art generative autoencoders in terms of classification, reconstruction, and generation.
arXiv Detail & Related papers (2021-06-24T13:56:02Z) - Dual Adversarial Auto-Encoders for Clustering [152.84443014554745]
We propose Dual Adversarial Auto-encoder (Dual-AAE) for unsupervised clustering.
By performing variational inference on the objective function of Dual-AAE, we derive a new reconstruction loss which can be optimized by training a pair of Auto-encoders.
Experiments on four benchmarks show that Dual-AAE achieves superior performance over state-of-the-art clustering methods.
arXiv Detail & Related papers (2020-08-23T13:16:34Z) - A New Modal Autoencoder for Functionally Independent Feature Extraction [6.690183908967779]
A new modal autoencoder (MAE) is proposed by orthogonalising the columns of the readout weight matrix.
The results were validated on the MNIST variations and USPS classification benchmark suite.
The new MAE introduces a very simple training principle for autoencoders and could be promising for the pre-training of deep neural networks.
arXiv Detail & Related papers (2020-06-25T13:25:10Z) - Isometric Autoencoders [36.947436313489746]
We advocate an isometry (i.e., local distance preserving) regularizer.
Our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by projection.
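One standard way to express the isometry condition above is that the decoder's Jacobian J satisfies J^T J = I, so a penalty on ||J^T J - I||_F^2 vanishes exactly when the decoder preserves local distances. The finite-difference sketch below illustrates this; it is an assumption-laden toy, not the paper's implementation.

```python
import numpy as np

def isometry_penalty(decoder, z, eps=1e-5):
    """Finite-difference estimate of ||J^T J - I||_F^2 for the decoder
    Jacobian J at latent point z; zero when the decoder is a local isometry."""
    k = len(z)
    # Build J column by column via central differences.
    J = np.stack([(decoder(z + eps * e) - decoder(z - eps * e)) / (2 * eps)
                  for e in np.eye(k)], axis=1)
    G = J.T @ J
    return np.sum((G - np.eye(k)) ** 2)

# A linear decoder with orthonormal columns is an exact isometry.
Q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(8, 3)))
print(isometry_penalty(lambda z: Q @ z, np.zeros(3)) < 1e-6)  # True
```

Scaling the decoder (e.g. `lambda z: 2 * Q @ z`) inflates the penalty, since J^T J becomes 4I rather than I.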
arXiv Detail & Related papers (2020-06-16T16:31:57Z) - Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.