Learning Autoencoders with Relational Regularization
- URL: http://arxiv.org/abs/2002.02913v4
- Date: Fri, 26 Jun 2020 01:05:36 GMT
- Title: Learning Autoencoders with Relational Regularization
- Authors: Hongteng Xu, Dixin Luo, Ricardo Henao, Svati Shah, Lawrence Carin
- Abstract summary: A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions, with a relational regularization on the learnable latent prior.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
- Score: 89.53065887608088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new algorithmic framework is proposed for learning autoencoders of data
distributions. We minimize the discrepancy between the model and target
distributions, with a \emph{relational regularization} on the learnable latent
prior. This regularization penalizes the fused Gromov-Wasserstein (FGW)
distance between the latent prior and its corresponding posterior, allowing one
to flexibly learn a structured prior distribution associated with the
generative model. Moreover, it helps co-training of multiple autoencoders even
if they have heterogeneous architectures and incomparable latent spaces. We
implement the framework with two scalable algorithms, making it applicable for
both probabilistic and deterministic autoencoders. Our relational regularized
autoencoder (RAE) outperforms existing methods, $e.g.$, the variational
autoencoder, Wasserstein autoencoder, and their variants, on generating images.
Additionally, our relational co-training strategy for autoencoders achieves
encouraging results in both synthesis and real-world multi-view learning tasks.
The code is at https://github.com/HongtengXu/Relational-AutoEncoders.
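As a rough illustration of the regularizer described above, the sketch below computes a fused Gromov-Wasserstein discrepancy between samples drawn from a latent prior and latent codes produced by an encoder, using the POT (Python Optimal Transport) library. It is a minimal sketch, not the authors' implementation (their code is in the repository linked above); the sample arrays, uniform weights, and alpha trade-off value are illustrative assumptions.

```python
# Minimal, illustrative sketch of an FGW-based relational regularizer.
# This is NOT the authors' implementation (see the repository linked above);
# it only shows which quantities enter a fused Gromov-Wasserstein discrepancy
# between the latent prior and the encoder posterior, using POT (pip install pot).
import numpy as np
import ot


def fgw_regularizer(prior_samples, posterior_samples, alpha=0.5):
    """Fused Gromov-Wasserstein discrepancy between two empirical latent distributions.

    prior_samples:     (n, d) samples drawn from the (learnable) latent prior.
    posterior_samples: (m, d) latent codes produced by the encoder.
    alpha:             trade-off between the feature (Wasserstein) term and the
                       structural (Gromov-Wasserstein) term.
    """
    n, m = len(prior_samples), len(posterior_samples)
    p = np.full(n, 1.0 / n)  # uniform weights over prior samples
    q = np.full(m, 1.0 / m)  # uniform weights over posterior samples

    # Feature cost: pairwise distances across the two sample sets.
    M = ot.dist(prior_samples, posterior_samples)
    # Structure costs: pairwise distances within each sample set.
    C1 = ot.dist(prior_samples, prior_samples)
    C2 = ot.dist(posterior_samples, posterior_samples)

    # Scalar FGW discrepancy between the two empirical distributions.
    return ot.gromov.fused_gromov_wasserstein2(
        M, C1, C2, p, q, loss_fun='square_loss', alpha=alpha)


# Toy usage with 2-D latent codes (purely illustrative values).
rng = np.random.default_rng(0)
z_prior = rng.normal(size=(64, 2))
z_posterior = rng.normal(loc=0.5, size=(64, 2))
print("FGW regularization value:", fgw_regularizer(z_prior, z_posterior))
```

In an actual training loop one would need a differentiable (e.g., entropic or Sinkhorn-style) approximation of this quantity so that gradients can reach the encoder and the learnable prior; the authors' scalable algorithms in the linked repository address this, and the sketch above only illustrates the inputs to the FGW term.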
Related papers
- Triple-Encoders: Representations That Fire Together, Wire Together [51.15206713482718]
Contrastive Learning is a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder.
This study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances.
We find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models.
arXiv Detail & Related papers (2024-02-19T18:06:02Z)
- Symmetric Equilibrium Learning of VAEs [56.56929742714685]
We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa.
We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
arXiv Detail & Related papers (2023-07-19T10:27:34Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Benign Autoencoders [0.0]
We formalize the problem of finding the optimal encoder-decoder pair and characterize its solution, which we name the "benign autoencoder" (BAE).
We prove that BAE projects data onto a manifold whose dimension is the optimal compressibility dimension of the generative problem.
As an illustration, we show how BAE can find optimal, low-dimensional latent representations that improve the performance of a discriminator under a distribution shift.
arXiv Detail & Related papers (2022-10-02T21:36:27Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple yet effective idea to improve the performance of VAEs on this task.
In our experiments, the proposed VAE model performs particularly well at generating samples from out-of-domain distributions.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Disentangling Autoencoders (DAE) [0.0]
We propose a novel framework for autoencoders based on the principles of symmetry transformations in group theory.
We believe that this model leads to a new field of disentanglement learning based on autoencoders without regularizers.
arXiv Detail & Related papers (2022-02-20T22:59:13Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Deterministic Decoding for Discrete Data in Variational Autoencoders [5.254093731341154]
We study a VAE model with a deterministic decoder (DD-VAE) for sequential data that selects the highest-scoring tokens instead of sampling.
We demonstrate the performance of DD-VAE on multiple datasets, including molecular generation and optimization problems.
arXiv Detail & Related papers (2020-03-04T16:36:52Z)
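To make the deterministic-decoding idea in this last entry concrete, here is a minimal sketch contrasting stochastic token sampling with always selecting the highest-scoring token. The tensor shapes and variable names are illustrative assumptions and are not taken from the DD-VAE code.

```python
# Minimal sketch contrasting stochastic decoding with the deterministic
# "highest-scoring token" decoding that the DD-VAE entry refers to.
# Shapes and names are illustrative only, not taken from the paper's code.
import torch

logits = torch.randn(4, 10)  # (sequence_length, vocab_size) decoder scores

# Stochastic decoding: sample each token from the categorical distribution.
sampled_tokens = torch.distributions.Categorical(logits=logits).sample()

# Deterministic decoding: always take the highest-scoring token.
argmax_tokens = logits.argmax(dim=-1)

print("sampled:", sampled_tokens.tolist())
print("argmax: ", argmax_tokens.tolist())
```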