Isometric Autoencoders
- URL: http://arxiv.org/abs/2006.09289v2
- Date: Sat, 3 Oct 2020 19:20:46 GMT
- Title: Isometric Autoencoders
- Authors: Amos Gropp, Matan Atzmon, Yaron Lipman
- Abstract summary: We advocate an isometry (i.e., local distance preserving) regularizer.
Our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by orthogonal projection.
- Score: 36.947436313489746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High dimensional data is often assumed to be concentrated on or near a
low-dimensional manifold. Autoencoders (AE) are a popular technique to learn
representations of such data by pushing it through a neural network with a low
dimension bottleneck while minimizing a reconstruction error. Using high
capacity AE often leads to a large collection of minimizers, many of which
represent a low dimensional manifold that fits the data well but generalizes
poorly.
Two sources of bad generalization are: extrinsic, where the learned manifold
possesses extraneous parts that are far from the data; and intrinsic, where the
encoder and decoder introduce arbitrary distortion in the low dimensional
parameterization. An approach taken to alleviate these issues is to add a
regularizer that favors a particular solution; common regularizers promote
sparsity, small derivatives, or robustness to noise.
In this paper, we advocate an isometry (i.e., local distance preserving)
regularizer. Specifically, our regularizer encourages: (i) the decoder to be an
isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the
encoder extends the inverse of the decoder to the ambient space by orthogonal
projection. In a nutshell, (i) and (ii) fix both intrinsic and extrinsic
degrees of freedom and provide a non-linear generalization to principal
component analysis (PCA). Experimenting with the isometry regularizer on
dimensionality reduction tasks produces useful low-dimensional data
representations.
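For intuition on how conditions (i) and (ii) can be turned into a training objective, the following is a minimal, hypothetical PyTorch-style sketch that estimates both terms with random unit directions and Jacobian-vector products. The architecture, layer widths, and loss weights are illustrative assumptions, not the authors' reference implementation.
```python
import torch
import torch.nn as nn
from torch.autograd.functional import jvp

latent_dim, ambient_dim = 2, 784  # illustrative sizes (e.g., flattened 28x28 images)

decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.Softplus(),
                        nn.Linear(128, ambient_dim))
encoder = nn.Sequential(nn.Linear(ambient_dim, 128), nn.Softplus(),
                        nn.Linear(128, latent_dim))

def isometric_ae_loss(x):
    """Reconstruction loss plus sketches of the two regularizers from the abstract."""
    x = x.requires_grad_(True)
    z = encoder(x)
    recon = ((decoder(z) - x) ** 2).mean()

    # (i) Decoder isometry: for a random unit direction u in latent space,
    # the Jacobian-vector product J_dec(z) u should have norm close to 1.
    u = torch.randn_like(z)
    u = u / u.norm(dim=1, keepdim=True)
    _, Jd_u = jvp(decoder, (z,), (u,), create_graph=True)
    iso = ((Jd_u.norm(dim=1) - 1.0) ** 2).mean()

    # (ii) Encoder as pseudo-inverse: rows of the encoder Jacobian should be
    # orthonormal, i.e. ||J_enc(x)^T v|| should be close to 1 for random unit v;
    # computed here via a vector-Jacobian product (autograd.grad).
    v = torch.randn_like(z)
    v = v / v.norm(dim=1, keepdim=True)
    JeT_v = torch.autograd.grad((z * v).sum(), x, create_graph=True)[0]
    piso = ((JeT_v.norm(dim=1) - 1.0) ** 2).mean()

    return recon + 0.1 * iso + 0.1 * piso  # weights are assumptions
```
In such a setup the directions u and v would be resampled at every optimization step, so the two penalties are stochastic estimates of expectations over the unit sphere.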
Related papers
- Rank Reduction Autoencoders -- Enhancing interpolation on nonlinear manifolds [3.180674374101366]
Rank Reduction Autoencoder (RRAE) is an autoencoder with an enlarged latent space.
Two formulations are presented, a strong and a weak one, that build a reduced basis accurately representing the latent space.
We show the efficiency of our formulations by using them for interpolation tasks and comparing the results to other autoencoders.
arXiv Detail & Related papers (2024-05-22T20:33:09Z) - Compression of Structured Data with Autoencoders: Provable Benefit of
Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z) - Learning Low-Rank Latent Spaces with Simple Deterministic Autoencoder:
Theoretical and Empirical Insights [1.246305060872372]
Low-Rank Autoencoder (LoRAE) is a simple autoencoder extension that learns a low-rank latent space.
Our model's superiority shines through in various tasks such as image generation and downstream classification.
arXiv Detail & Related papers (2023-10-24T21:24:27Z) - Geometric Autoencoders -- What You See is What You Decode [12.139222986297263]
We propose a differential geometric perspective on the decoder, leading to insightful diagnostics for an embedding's distortion, and a new regularizer mitigating such distortion.
Our "Geometric Autoencoder" avoids stretching the embedding spuriously, so that the visualization captures the data structure more faithfully.
arXiv Detail & Related papers (2023-06-30T13:24:31Z) - Deep Nonparametric Estimation of Intrinsic Data Structures by Chart
Autoencoders: Generalization Error and Robustness [11.441464617936173]
We employ chart autoencoders to encode data into low-dimensional latent features on a collection of charts.
By training autoencoders, we show that chart autoencoders can effectively denoise input data corrupted by Gaussian (normal) noise.
As a special case, our theory also applies to classical autoencoders, as long as the data manifold has a global parametrization.
arXiv Detail & Related papers (2023-03-17T10:01:32Z) - Convergent autoencoder approximation of low bending and low distortion
manifold embeddings [5.5711773076846365]
We propose and analyze a novel regularization for learning the encoder component of an autoencoder.
The loss functional is computed via Monte Carlo integration with different sampling strategies for pairs of points on the input manifold (a minimal illustrative sketch of such a pairwise Monte Carlo loss appears after this list).
Our main theorem identifies a loss functional of the embedding map as the $\Gamma$-limit of the sampling-dependent loss functionals.
arXiv Detail & Related papers (2022-08-22T10:31:31Z) - Toward a Geometrical Understanding of Self-supervised Contrastive
Learning [55.83778629498769]
Self-supervised learning (SSL) is one of the premier techniques to create data representations that are actionable for transfer learning in the absence of human annotations.
Mainstream SSL techniques rely on a specific deep neural network architecture with two cascaded neural networks: the encoder and the projector.
In this paper, we investigate how the strength of the data augmentation policies affects the data embedding.
arXiv Detail & Related papers (2022-05-13T23:24:48Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - A Local Similarity-Preserving Framework for Nonlinear Dimensionality
Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of a matrix and define the context of data points.
Experiments on data classification and clustering over eight real datasets show that Vec2vec outperforms several classical dimensionality reduction methods under statistical hypothesis testing.
arXiv Detail & Related papers (2021-03-10T23:10:47Z) - Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
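Picking up the forward reference from the entry on low-bending, low-distortion embeddings above: a pairwise loss of that kind is typically estimated by Monte Carlo sampling of point pairs. The sketch below shows a generic distance-distortion penalty on randomly sampled pairs; the function name and the specific penalty are illustrative assumptions, not the bending and distortion energies analyzed in that paper.
```python
import torch

def pairwise_distortion_loss(encoder, x, num_pairs=256):
    """Monte Carlo estimate of a pairwise distance-distortion penalty.

    Samples random pairs (x_i, x_j) from the batch and penalizes deviation
    of the latent distance from the corresponding ambient distance.
    """
    n = x.shape[0]
    i = torch.randint(0, n, (num_pairs,))
    j = torch.randint(0, n, (num_pairs,))
    keep = i != j                      # discard degenerate pairs
    i, j = i[keep], j[keep]

    z = encoder(x)
    d_ambient = (x[i] - x[j]).flatten(1).norm(dim=1)
    d_latent = (z[i] - z[j]).norm(dim=1)

    # A ratio near 1 means the encoder locally preserves pairwise distances.
    return ((d_latent / (d_ambient + 1e-8) - 1.0) ** 2).mean()
```
Such a term would typically be added to the reconstruction loss with a small weight, with the pairs resampled at every optimization step.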
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.