Learning Low-Rank Latent Spaces with Simple Deterministic Autoencoder:
Theoretical and Empirical Insights
- URL: http://arxiv.org/abs/2310.16194v1
- Date: Tue, 24 Oct 2023 21:24:27 GMT
- Title: Learning Low-Rank Latent Spaces with Simple Deterministic Autoencoder:
Theoretical and Empirical Insights
- Authors: Alokendu Mazumder, Tirthajit Baruah, Bhartendu Kumar, Rishab Sharma,
Vishwajeet Pattanaik, Punit Rathore
- Abstract summary: Low-Rank Autoencoder (LoRAE) is a simple autoencoder extension that learns a low-rank latent space.
Its advantages are evident across tasks such as image generation and downstream classification.
- Score: 1.246305060872372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The autoencoder is an unsupervised learning paradigm that aims to create a
compact latent representation of data by minimizing the reconstruction loss.
However, it tends to overlook the fact that most data, such as images, lie in a
lower-dimensional space, a property that is crucial for effective data representation.
To address this limitation, we propose a novel approach called Low-Rank
Autoencoder (LoRAE). In LoRAE, we incorporate a low-rank regularizer to
adaptively reconstruct a low-dimensional latent space while preserving the
basic objective of an autoencoder. This helps embed the data in a
lower-dimensional space while preserving important information. LoRAE is thus a simple
autoencoder extension that learns a low-rank latent space. Theoretically, we
establish a tighter error bound for our model. Empirically, our model's
advantages are evident across tasks such as image generation and
downstream classification. Both theoretical and practical outcomes highlight
the importance of acquiring low-dimensional embeddings.
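The abstract does not spell out the exact form of the low-rank regularizer, but a common way to encourage a low-rank latent space in a deterministic autoencoder is to penalize the nuclear norm of the batch of latent codes alongside the reconstruction loss. The sketch below (in PyTorch) is a minimal illustration of that idea; the architecture, the weight `lam`, and the nuclear-norm choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleAE(nn.Module):
    """Plain deterministic autoencoder; the low-rank bias comes from the loss below."""
    def __init__(self, in_dim=784, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def low_rank_ae_loss(model, x, lam=0.01):
    """Reconstruction loss plus a nuclear-norm penalty on the batch of latent codes.

    The nuclear norm (sum of singular values) is a convex surrogate for matrix rank,
    so minimizing it pushes the latent codes toward a low-dimensional subspace.
    """
    x_hat, z = model(x)
    recon = F.mse_loss(x_hat, x)
    nuclear = torch.linalg.svdvals(z).sum()   # ||Z||_* over the (batch, latent) matrix
    return recon + lam * nuclear

# Usage: one training step on a batch of flattened 28x28 images.
model = SimpleAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = low_rank_ae_loss(model, torch.randn(64, 784))
loss.backward()
opt.step()
```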
Related papers
- Rank Reduction Autoencoders -- Enhancing interpolation on nonlinear manifolds [3.180674374101366]
Rank Reduction Autoencoder (RRAE) is an autoencoder with an enlarged latent space.
Two formulations are presented, a strong and a weak one, that build a reduced basis accurately representing the latent space.
We show the efficiency of our formulations by using them for interpolation tasks and comparing the results to other autoencoders.
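The entry describes building a reduced basis for an enlarged latent space; one straightforward reading of that idea is to take a truncated SVD of the latent codes collected over a batch (or the training set) and keep only the leading singular directions. The snippet below sketches just that projection step under that interpretation; it is not the strong or weak RRAE formulation itself, and the rank `r` is an assumed hyperparameter.

```python
import torch

def reduced_latent_basis(Z, r):
    """Project latent codes Z of shape (n, d) onto their top-r singular directions.

    `basis` spans a rank-r subspace of the latent space, `coords` are the
    coordinates in that basis, and `Z_approx` is the rank-r approximation of Z.
    """
    U, S, Vh = torch.linalg.svd(Z, full_matrices=False)
    basis = Vh[:r]               # (r, d) orthonormal rows
    coords = Z @ basis.T         # (n, r) reduced-basis coordinates
    Z_approx = coords @ basis    # (n, d) rank-r reconstruction of the latents
    return basis, coords, Z_approx
```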
arXiv Detail & Related papers (2024-05-22T20:33:09Z)
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- Are We Using Autoencoders in a Wrong Way? [3.110260251019273]
Autoencoders are used for dimensionality reduction, anomaly detection and feature extraction.
We revisited the standard training for the undercomplete Autoencoder modifying the shape of the latent space.
We also explored the behaviour of the latent space in the case of reconstruction of a random sample from the whole dataset.
arXiv Detail & Related papers (2023-09-04T11:22:43Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Autoencoders with Intrinsic Dimension Constraints for Learning Low Dimensional Image Representations [27.40298734517967]
We propose a novel deep representation learning approach with autoencoders, which incorporates regularization of the global and local intrinsic dimension (ID) constraints into the reconstruction of data representations.
This approach not only preserves the global manifold structure of the whole dataset, but also maintains the local manifold structure of the feature maps of each point.
arXiv Detail & Related papers (2023-04-16T03:43:08Z)
- Discrete Key-Value Bottleneck [95.61236311369821]
Deep neural networks perform well on classification tasks where data streams are i.i.d. and labeled data is abundant.
One powerful approach that has addressed this challenge involves pre-training of large encoders on volumes of readily available data, followed by task-specific tuning.
Given a new task, however, updating the weights of these encoders is challenging as a large number of weights needs to be fine-tuned, and as a result, they forget information about the previous tasks.
We propose a model architecture to address this issue, building upon a discrete bottleneck containing pairs of separate and learnable key-value codes.
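As summarized, the bottleneck stores learnable (key, value) pairs: an incoming representation is matched to its nearest key and the paired value is passed downstream, so task-specific tuning can be confined to the values. The sketch below is a minimal single-head reading of that idea; the Euclidean nearest-key lookup, code count, and dimensions are simplifying assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class DiscreteKeyValueBottleneck(nn.Module):
    """Quantize encoder features to the nearest key and emit the paired learnable value."""
    def __init__(self, num_codes=512, key_dim=64, value_dim=64):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_codes, key_dim))
        self.values = nn.Parameter(torch.randn(num_codes, value_dim))

    def forward(self, h):                      # h: (batch, key_dim) encoder features
        dists = torch.cdist(h, self.keys)      # (batch, num_codes) pairwise distances
        idx = dists.argmin(dim=1)              # nearest key for each sample
        return self.values[idx]                # value code paired with that key
```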
arXiv Detail & Related papers (2022-07-22T17:52:30Z)
- Self-Supervised Point Cloud Representation Learning with Occlusion Auto-Encoder [63.77257588569852]
We present 3D Occlusion Auto-Encoder (3D-OAE) for learning representations for point clouds.
Our key idea is to randomly occlude some local patches of the input point cloud and establish the supervision via recovering the occluded patches.
In contrast with previous methods, our 3D-OAE can remove a large proportion of patches and predict them only with a small number of visible patches.
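The recipe, hiding most local patches of a point cloud and training the network to recover them from the few visible ones, can be made concrete with a simple random patch-masking step. The grouping below assigns points to patches at random purely for illustration; the actual 3D-OAE builds spatially coherent patches and uses its own encoder-decoder, so treat this as an assumed stand-in for the masking stage only.

```python
import torch

def mask_point_patches(points, num_patches=64, mask_ratio=0.75):
    """Split a point cloud of shape (n, 3) into patches and occlude most of them.

    Returns the visible points (model input) and the occluded points
    (reconstruction targets), mimicking occlusion-based self-supervision.
    """
    n = points.shape[0]
    patch_ids = torch.randint(num_patches, (n,))                 # crude random patch assignment
    hidden = torch.randperm(num_patches)[: int(mask_ratio * num_patches)]
    is_hidden = torch.isin(patch_ids, hidden)                    # True for occluded points
    return points[~is_hidden], points[is_hidden]                 # visible, occluded
```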
arXiv Detail & Related papers (2022-03-26T14:06:29Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
- Implicit Rank-Minimizing Autoencoder [21.2045061949013]
Implicit Rank-Minimizing Autoencoder (IRMAE) is simple, deterministic, and learns compact latent spaces.
We demonstrate the validity of the method on several image generation and representation learning tasks.
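The summary above does not describe the mechanism, but in the cited IRMAE paper the rank minimization is implicit: a few extra linear layers (no nonlinearities) are inserted after the encoder, and gradient descent on the ordinary reconstruction loss then biases the resulting latent codes toward low rank. A minimal sketch of that construction, with the depth and width chosen arbitrarily:

```python
import torch.nn as nn

def irmae_latent_head(latent_dim=128, num_linear=4):
    """Extra purely linear layers placed between encoder and decoder.

    They add no expressive power, but training them with gradient descent
    implicitly minimizes the rank of the latent representation.
    """
    return nn.Sequential(*[nn.Linear(latent_dim, latent_dim) for _ in range(num_linear)])
```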
arXiv Detail & Related papers (2020-10-01T20:48:52Z)
- Isometric Autoencoders [36.947436313489746]
We advocate an isometry (i.e., local distance preserving) regularizer.
Our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by projection.
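One way to implement the first part of such a regularizer, making the decoder locally distance-preserving, is to push the length of decoder Jacobian-vector products along random unit latent directions toward 1. The sketch below shows only that term; the pseudo-inverse term for the encoder and the weighting between terms are omitted, and this is an interpretation rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F
from torch.autograd.functional import jvp

def decoder_isometry_penalty(decoder, z):
    """Encourage ||J_decoder(z) v|| ~ 1 for random unit directions v in latent space."""
    v = F.normalize(torch.randn_like(z), dim=-1)           # random unit tangent vectors
    _, Jv = jvp(decoder, (z,), (v,), create_graph=True)    # directional derivative of the decoder
    lengths = Jv.flatten(1).norm(dim=1)                    # output-space length of each Jv
    return ((lengths - 1.0) ** 2).mean()                   # deviation from local isometry
```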
arXiv Detail & Related papers (2020-06-16T16:31:57Z)
- Robust Large-Margin Learning in Hyperbolic Space [64.42251583239347]
We present the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space.
We provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples.
We prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees.
arXiv Detail & Related papers (2020-04-11T19:11:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.