Autoencoder Image Interpolation by Shaping the Latent Space
- URL: http://arxiv.org/abs/2008.01487v2
- Date: Thu, 22 Oct 2020 02:03:08 GMT
- Title: Autoencoder Image Interpolation by Shaping the Latent Space
- Authors: Alon Oring and Zohar Yakhini and Yacov Hel-Or
- Abstract summary: Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types.
We propose a regularization technique that shapes the latent representation to follow a manifold consistent with the training images.
- Score: 12.482988592988868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autoencoders represent an effective approach for computing the underlying
factors characterizing datasets of different types. The latent representation
of autoencoders has been studied in the context of enabling interpolation
between data points by decoding convex combinations of latent vectors. This
interpolation, however, often leads to artifacts or produces unrealistic
results during reconstruction. We argue that these incongruities are due to the
structure of the latent space and to the fact that such naively interpolated latent
vectors deviate from the data manifold. In this paper, we propose a
regularization technique that shapes the latent representation to follow a
manifold that is consistent with the training images and that drives the
manifold to be smooth and locally convex. This regularization not only enables
faithful interpolation between data points, as we show herein, but can also be
used as a general regularization technique to avoid overfitting or to produce
new samples for data augmentation.
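To ground the discussion, the sketch below shows the interpolation operation the abstract refers to: encoding two images and decoding convex combinations of their latent codes. The architecture and layer sizes are illustrative assumptions, not the paper's model, and the proposed regularizer itself is omitted.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Toy fully connected autoencoder (sizes are illustrative only)."""
    def __init__(self, dim_in=784, dim_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                     nn.Linear(128, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def interpolate(model, x1, x2, steps=8):
    """Decode convex combinations (1 - a) * z1 + a * z2 of two latent codes.

    With an unregularized autoencoder the intermediate codes may fall off
    the data manifold, which is exactly where the artifacts described in
    the abstract appear.
    """
    z1, z2 = model.encoder(x1), model.encoder(x2)
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    z_path = (1 - alphas) * z1 + alphas * z2  # straight line in latent space
    return model.decoder(z_path)
```

The proposed regularization shapes the latent manifold to be smooth and locally convex precisely so that such straight-line paths stay close to codes of realistic images.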
Related papers
- Thinner Latent Spaces: Detecting dimension and imposing invariance through autoencoder gradient constraints [9.380902608139902]
We show that orthogonality relations within the latent layer of the network can be leveraged to infer the intrinsic dimensionality of nonlinear manifold data sets.
We outline the relevant theory relying on differential geometry, and describe the corresponding gradient-descent optimization algorithm.
arXiv Detail & Related papers (2024-08-28T20:56:35Z)
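As a hedged illustration of the dimension-detection idea in the entry above (an illustrative reading, not the paper's algorithm; the decoder-Jacobian SVD and the `rel_tol` threshold are assumptions): the Jacobian's columns span the local tangent space of the decoded manifold, so counting its numerically significant singular values gives a local intrinsic-dimension estimate.

```python
import torch

def estimate_local_dim(decoder, z, rel_tol=1e-3):
    """Estimate the intrinsic dimension of the decoded manifold at z by
    counting significant singular values of the decoder Jacobian."""
    J = torch.autograd.functional.jacobian(decoder, z)  # (dim_out, dim_latent)
    s = torch.linalg.svdvals(J)                         # descending order
    return int((s > rel_tol * s[0]).sum())
```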
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifold, named soft manifold, that can solve this situation.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
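A minimal sketch of quantizing the latent space as an organizing inductive bias, assuming per-dimension scalar codebooks and a straight-through gradient estimator (both are assumptions of this sketch rather than the paper's exact formulation):

```python
import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    """Snap each latent dimension to the nearest entry of a small
    learnable per-dimension codebook."""
    def __init__(self, dim_latent=16, n_values=10):
        super().__init__()
        self.codebooks = nn.Parameter(torch.randn(dim_latent, n_values))

    def forward(self, z):                                 # z: (batch, dim)
        dist = (z.unsqueeze(-1) - self.codebooks).abs()   # (batch, dim, n_values)
        idx = dist.argmin(dim=-1)                         # nearest code per dim
        z_q = torch.gather(self.codebooks.expand(z.shape[0], -1, -1),
                           2, idx.unsqueeze(-1)).squeeze(-1)
        return z + (z_q - z).detach()  # straight-through gradient estimator
```

Inserted between encoder and decoder, the quantizer forces codes onto a small discrete grid, which is one way to realize the organized latent space the summary describes.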
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics, and their accurate computation, can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
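Since straight lines in latent space ignore the curvature the decoder induces, geodesics under the decoder's pullback metric are the natural alternative. The sketch below approximates such a geodesic by minimizing a discrete curve energy; the optimizer, step counts, and the `decoder_geodesic` helper are assumptions for illustration, not VTAE's actual method.

```python
import torch

def decoder_geodesic(decoder, z0, z1, n_points=10, steps=200, lr=1e-2):
    """Approximate a geodesic between z0 and z1 under the pullback metric
    by minimizing sum_k ||decoder(z_{k+1}) - decoder(z_k)||^2 over the
    interior points of a discretized path."""
    t = torch.linspace(0.0, 1.0, n_points).unsqueeze(1)
    path = (1 - t) * z0 + t * z1               # linear initialization
    interior = path[1:-1].clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        full = torch.cat([z0.unsqueeze(0), interior, z1.unsqueeze(0)])
        x = decoder(full)
        ((x[1:] - x[:-1]) ** 2).sum().backward()  # discrete curve energy
        opt.step()
    return torch.cat([z0.unsqueeze(0), interior.detach(), z1.unsqueeze(0)])
```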
- Relative representations enable zero-shot latent space communication [19.144630518400604]
Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations.
We show how neural architectures can leverage these relative representations to guarantee, in practice, latent isometry invariance.
arXiv Detail & Related papers (2022-09-30T12:37:03Z)
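The relative-representation idea can be sketched as re-expressing each latent code by its cosine similarities to the codes of a fixed set of anchor samples; since cosine similarity is invariant to rotations and rescalings of the latent space, independently trained encoders yield comparable representations. Anchor selection is left to the caller here, and this is a sketch rather than the authors' full pipeline.

```python
import torch
import torch.nn.functional as F

def relative_representation(z, anchors):
    """Map latent codes to cosine similarities against anchor codes.

    z:       (batch, dim_latent) latent codes
    anchors: (n_anchors, dim_latent) codes of a fixed set of anchor samples
    returns: (batch, n_anchors) rotation-invariant representation
    """
    z = F.normalize(z, dim=-1)
    anchors = F.normalize(anchors, dim=-1)
    return z @ anchors.T
```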
- Convergent autoencoder approximation of low bending and low distortion manifold embeddings [5.5711773076846365]
We propose and analyze a novel regularization for learning the encoder component of an autoencoder.
The loss functional is computed via Monte Carlo integration with different sampling strategies for pairs of points on the input manifold.
Our main theorem identifies a loss functional of the embedding map as the $\Gamma$-limit of the sampling-dependent loss functionals.
arXiv Detail & Related papers (2022-08-22T10:31:31Z)
- Learning low bending and low distortion manifold embeddings [1.8046244926068666]
The encoder provides an embedding from the input data manifold into a latent space which may then be used for further processing.
In this article, the embedding into latent space is regularized via a loss function that promotes an as isometric and as flat embedding.
The loss functional is computed via a Monte Carlo integration which is shown to be consistent with a geometric loss functional defined directly on the embedding map.
arXiv Detail & Related papers (2021-04-27T13:51:12Z)
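The two entries above both estimate a geometric regularizer by Monte Carlo sampling of point pairs. Below is a minimal sketch of an isometry-promoting loss in that spirit, using Euclidean input distance as a stand-in for the manifold distance; the sampling scheme and the squared-difference penalty are assumptions of the sketch.

```python
import torch

def isometry_loss(encoder, x, n_pairs=256):
    """Monte Carlo estimate of an isometry-promoting loss: for random
    pairs of training points, penalize the squared difference between
    their input-space and latent-space distances."""
    i = torch.randint(0, x.shape[0], (n_pairs,))
    j = torch.randint(0, x.shape[0], (n_pairs,))
    d_in = (x[i] - x[j]).flatten(1).norm(dim=1)           # input distances
    d_lat = (encoder(x[i]) - encoder(x[j])).norm(dim=1)   # latent distances
    return ((d_lat - d_in) ** 2).mean()
```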
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
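As a toy illustration of the sparse-then-interpolate idea (a per-pixel stand-in for the paper's convolutional features; `feature_fn`, the keep fraction, and nearest-neighbor filling are all assumptions of this sketch):

```python
import numpy as np
from scipy.interpolate import griddata

def sparse_feature_map(image, feature_fn, keep_frac=0.25, rng=None):
    """Evaluate an expensive per-pixel feature only at a random subset of
    locations, then densely reconstruct the map by interpolation."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1).reshape(-1, 2)
    keep = rng.random(len(coords)) < keep_frac            # stochastic mask
    vals = feature_fn(image[coords[keep, 0], coords[keep, 1]])
    return griddata(coords[keep], vals, coords, method="nearest").reshape(h, w)
```

For example, `sparse_feature_map(img, np.tanh, keep_frac=0.25)` evaluates `np.tanh` at a quarter of the pixels and fills the rest from their nearest sampled neighbors.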
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.