Symmetric Wasserstein Autoencoders
- URL: http://arxiv.org/abs/2106.13024v1
- Date: Thu, 24 Jun 2021 13:56:02 GMT
- Title: Symmetric Wasserstein Autoencoders
- Authors: Sun Sun and Hongyu Guo
- Abstract summary: We introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs).
We propose to symmetrically match the joint distributions of the observed data and the latent representation induced by the encoder and the decoder.
We empirically show the superior performance of SWAEs over the state-of-the-art generative autoencoders in terms of classification, reconstruction, and generation.
- Score: 22.196642357767338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Leveraging the framework of Optimal Transport, we introduce a new family of
generative autoencoders with a learnable prior, called Symmetric Wasserstein
Autoencoders (SWAEs). We propose to symmetrically match the joint distributions
of the observed data and the latent representation induced by the encoder and
the decoder. The resulting algorithm jointly optimizes the modelling losses in
both the data and the latent spaces, with the loss in the data space yielding a
denoising effect. With the symmetric treatment of the data and the latent
representation, the algorithm implicitly preserves the local structure of the
data in the latent space. To further improve the quality of the latent
representation, we incorporate a reconstruction loss into the objective, which
significantly benefits both the generation and reconstruction. We empirically
show the superior performance of SWAEs over the state-of-the-art generative
autoencoders in terms of classification, reconstruction, and generation.
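The symmetric matching described in the abstract can be illustrated on samples: compare the encoder-induced joint (x, E(x)) against the decoder-induced joint (D(z), z) with a sample-based optimal-transport distance. The sketch below uses a sliced Wasserstein estimate and a toy linear encoder/decoder; these are illustrative assumptions, not the authors' architecture or exact objective.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=200, seed=0):
    """Monte-Carlo sliced 1-Wasserstein distance between two equal-sized
    sample sets X, Y of shape (n, d): project onto random unit directions,
    sort the 1-D projections, and average the absolute differences."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    px = np.sort(X @ theta.T, axis=0)  # (n, n_proj), sorted per direction
    py = np.sort(Y @ theta.T, axis=0)
    return float(np.mean(np.abs(px - py)))

def symmetric_joint_loss(x, z_prior, encode, decode):
    """Match the joints (x, E(x)) and (D(z), z) with one transport term,
    in the spirit of the paper's symmetric treatment of both spaces."""
    joint_enc = np.hstack([x, encode(x)])            # encoder-induced joint
    joint_dec = np.hstack([decode(z_prior), z_prior])  # decoder-induced joint
    return sliced_wasserstein(joint_enc, joint_dec)

# Toy linear maps standing in for the encoder/decoder (assumptions).
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 2))
encode = lambda x: x @ W
decode = lambda z: z @ W.T
x = rng.normal(size=(256, 4))      # "observed data"
z = rng.normal(size=(256, 2))      # samples from the prior
loss = symmetric_joint_loss(x, z, encode, decode)
```

Because both joints enter the same transport term, neither space is privileged, which is the symmetry the abstract refers to; a reconstruction loss would be added on top of this term.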
Related papers
- Variational Encoder-Decoders for Learning Latent Representations of Physical Systems [0.0]
We present a framework for learning data-driven low-dimensional representations of a physical system.
We successfully model the hydraulic pressure response at observation wells of a groundwater flow model.
arXiv Detail & Related papers (2024-12-06T16:46:48Z) - $ε$-VAE: Denoising as Visual Decoding [61.29255979767292]
In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space.
Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input.
We propose denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder.
We evaluate our approach by assessing both reconstruction (rFID) and generation quality.
arXiv Detail & Related papers (2024-10-05T08:27:53Z) - Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation [51.44054828384487]
We propose a novel parameterization method dubbed Hierarchical Generative Latent Distillation (H-GLaD).
This method systematically explores hierarchical layers within generative adversarial networks (GANs).
In addition, we introduce a novel class-relevant feature distance metric to alleviate the computational burden associated with synthetic dataset evaluation.
arXiv Detail & Related papers (2024-06-09T09:15:54Z) - Rank Reduction Autoencoders [3.180674374101366]
We introduce a new class of deterministic autoencoders, Rank Reduction Autoencoders (RRAEs).
In RRAEs, the bottleneck is defined by the rank of the latent matrix, thereby alleviating the dependence of the encoder/decoder architecture on the bottleneck size.
We empirically demonstrate that both RRAEs and aRRAEs are stable, scalable, and reliable.
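The rank bottleneck described above can be emulated with a truncated SVD of the latent matrix: the effective capacity is the rank k, not the latent dimension. This is a minimal numpy sketch of that idea, not the RRAE architecture itself.

```python
import numpy as np

def rank_reduce(Z, k):
    """Project a latent matrix Z (n_samples x latent_dim) onto its best
    rank-k approximation via truncated SVD (Eckart-Young). The bottleneck
    is the rank k rather than the width of the latent layer."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
Z = rng.normal(size=(128, 16))  # a batch of 16-dimensional latents
Z_k = rank_reduce(Z, k=3)       # same shape, but rank at most 3
```

Decoupling the bottleneck (k) from the layer width (16) is what lets the encoder/decoder architecture stay fixed while the effective capacity varies.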
arXiv Detail & Related papers (2024-05-22T20:33:09Z) - Koopcon: A new approach towards smarter and less complex learning [13.053285552524052]
In the era of big data, the sheer volume and complexity of datasets pose significant challenges in machine learning.
This paper introduces an innovative Autoencoder-based dataset condensation model backed by Koopman operator theory.
Inspired by the predictive coding mechanisms of the human brain, our model leverages a novel approach to encode and reconstruct data.
arXiv Detail & Related papers (2024-05-22T17:47:14Z) - UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z) - Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z) - Curvature regularization for Non-line-of-sight Imaging from Under-sampled Data [5.591221518341613]
Non-line-of-sight (NLOS) imaging aims to reconstruct the three-dimensional hidden scenes from the data measured in the line-of-sight.
We propose novel NLOS reconstruction models based on curvature regularization.
We evaluate the proposed algorithms on both synthetic and real datasets.
arXiv Detail & Related papers (2023-01-01T14:10:43Z) - Benign Autoencoders [0.0]
We formalize the problem of finding the optimal encoder-decoder pair and characterize its solution, which we name the "benign autoencoder" (BAE).
We prove that BAE projects data onto a manifold whose dimension is the optimal compressibility dimension of the generative problem.
As an illustration, we show how BAE can find optimal, low-dimensional latent representations that improve the performance of a discriminator under a distribution shift.
arXiv Detail & Related papers (2022-10-02T21:36:27Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent Space Distribution Matching in WAE [51.09507030387935]
The Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution.
We propose to use the contrastive learning framework, which has been shown to be effective for self-supervised representation learning, as a means to resolve this problem.
We show that using the contrastive learning framework to optimize the WAE loss achieves faster convergence and more stable optimization compared with existing popular algorithms for WAE.
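The constrained matching described above is usually relaxed into a penalized objective; in standard WAE notation (a sketch, with $c$ a reconstruction cost, $G_\theta$ the decoder, $q_\phi(z)$ the aggregate posterior, and $D_Z$ any divergence on the latent space):

```latex
\min_{\phi,\theta} \;
\mathbb{E}_{p(x)}\,\mathbb{E}_{q_\phi(z \mid x)}
  \big[\, c\big(x, G_\theta(z)\big) \,\big]
\;+\; \lambda \, D_Z\big(q_\phi(z), p(z)\big)
```

The choice of $D_Z$ is what the contrastive-learning proposal above targets: the divergence term is what must be estimated and optimized stably from samples.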
arXiv Detail & Related papers (2021-10-19T22:55:47Z) - Encoded Prior Sliced Wasserstein AutoEncoder for learning latent manifold representations [0.7614628596146599]
We introduce an Encoded Prior Sliced Wasserstein AutoEncoder.
An additional prior-encoder network learns an embedding of the data manifold.
We show that the prior encodes the geometry underlying the data unlike conventional autoencoders.
arXiv Detail & Related papers (2020-10-02T14:58:54Z) - On the Encoder-Decoder Incompatibility in Variational Text Modeling and
Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.