q-VAE for Disentangled Representation Learning and Latent Dynamical Systems
- URL: http://arxiv.org/abs/2003.01852v3
- Date: Thu, 26 Aug 2021 02:15:13 GMT
- Title: q-VAE for Disentangled Representation Learning and Latent Dynamical Systems
- Authors: Taisuke Kobayashi
- Abstract summary: A variational autoencoder (VAE) derived from Tsallis statistics, called the q-VAE, is proposed.
In the proposed method, a standard VAE is employed to statistically extract the latent space hidden in the sampled data.
- Score: 8.071506311915396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A variational autoencoder (VAE) derived from Tsallis statistics, called the
q-VAE, is proposed. In the proposed method, a standard VAE is employed to
statistically extract the latent space hidden in the sampled data, and this latent
space helps make robots controllable within feasible computational time and cost.
To improve the usefulness of the latent space, this paper focuses on
disentangled representation learning, for which $\beta$-VAE is the baseline method.
Starting from a Tsallis-statistics perspective, a new lower bound on the likelihood
of the sampled data is derived for the proposed q-VAE; maximizing it can be
interpreted as an adaptive $\beta$-VAE with a deformed Kullback-Leibler divergence.
To verify the benefits of the proposed q-VAE, a benchmark task of extracting the
latent space from the MNIST dataset was performed. The results demonstrate that
the proposed q-VAE improved disentangled representation while maintaining the
reconstruction accuracy of the data. In addition, the q-VAE relaxes the independence
condition between data samples, which is demonstrated by learning the latent
dynamics of nonlinear dynamical systems. Combined with its disentangled
representation, the proposed q-VAE achieves stable and accurate long-term state
prediction from an initial state and an action sequence.
The dataset for hexapod walking is available on IEEE Dataport:
https://dx.doi.org/10.21227/99af-jw71
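For intuition, here is a minimal sketch of a q-deformed evidence lower bound built from the Tsallis q-logarithm $\ln_q(x) = (x^{1-q} - 1)/(1 - q)$, which recovers $\ln(x)$ as $q \to 1$. Since $\ln_q$ is concave for $q > 0$, Jensen's inequality gives a valid lower bound on $\ln_q p(x)$; the function names, the single-sample Monte Carlo form, and the choice $q = 1.5$ are illustrative assumptions, not the paper's exact derivation or code.

```python
# Illustrative sketch of a q-deformed ELBO in PyTorch; names and the
# Monte Carlo form are assumptions, not the paper's reference code.
import torch

def q_log_from_log(log_x: torch.Tensor, q: float) -> torch.Tensor:
    """Tsallis q-logarithm ln_q(x) = (x^(1-q) - 1) / (1 - q),
    evaluated from log(x) for numerical stability; ln_q -> ln as q -> 1."""
    if abs(q - 1.0) < 1e-8:
        return log_x
    return torch.expm1((1.0 - q) * log_x) / (1.0 - q)

def q_elbo(log_px_z: torch.Tensor,   # log p(x|z), decoder likelihood
           log_pz: torch.Tensor,     # log p(z), latent prior
           log_qz_x: torch.Tensor,   # log q(z|x), encoder posterior
           q: float = 1.5) -> torch.Tensor:
    """Monte Carlo lower bound on ln_q p(x): since ln_q is concave for
    q > 0, Jensen's inequality gives
        ln_q p(x) >= E_{z ~ q(z|x)}[ln_q(p(x|z) p(z) / q(z|x))].
    All inputs are per-sample log-densities of shape (batch,)."""
    log_ratio = log_px_z + log_pz - log_qz_x
    return q_log_from_log(log_ratio, q).mean()
```

Because $\mathrm{d}\,\ln_q(e^r)/\mathrm{d}r = e^{(1-q)r}$, each sample's gradient is reweighted by its own likelihood ratio relative to the standard ELBO (the $q \to 1$ limit), which is one way to read the "adaptive $\beta$-VAE" interpretation.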
Related papers
- RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling [79.56442336234221]
We introduce RegaVAE, a retrieval-augmented language model built upon the variational auto-encoder (VAE).
It encodes the text corpus into a latent space, capturing current and future information from both source and target text.
Experimental results on various datasets demonstrate significant improvements in text generation quality and hallucination removal.
arXiv Detail & Related papers (2023-10-16T16:42:01Z)
- Self-Supervised Dataset Distillation for Transfer Learning [77.4714995131992]
We propose the novel problem of distilling an unlabeled dataset into a set of small synthetic samples for efficient self-supervised learning (SSL).
We first prove that a gradient of synthetic samples with respect to an SSL objective in naive bilevel optimization is biased due to randomness originating from data augmentations or masking.
We empirically validate the effectiveness of our method on various applications involving transfer learning.
arXiv Detail & Related papers (2023-10-10T10:48:52Z)
- Symmetric Equilibrium Learning of VAEs [56.56929742714685]
We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa.
We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
arXiv Detail & Related papers (2023-07-19T10:27:34Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- RENs: Relevance Encoding Networks [0.0]
This paper proposes relevance encoding networks (RENs): a novel probabilistic VAE-based framework that uses the automatic relevance determination (ARD) prior in the latent space to learn the data-specific bottleneck dimensionality.
We show that the proposed model learns the relevant latent bottleneck dimensionality without compromising the representation and generation quality of the samples.
arXiv Detail & Related papers (2022-05-25T21:53:48Z)
- A Variational Autoencoder for Heterogeneous Temporal and Longitudinal Data [0.3749861135832073]
Recently proposed extensions to VAEs that can handle temporal and longitudinal data have applications in healthcare, behavioural modelling, and predictive maintenance.
We propose the heterogeneous longitudinal VAE (HL-VAE) that extends the existing temporal and longitudinal VAEs to heterogeneous data.
HL-VAE provides efficient inference for high-dimensional datasets and includes likelihood models for continuous, count, categorical, and ordinal data.
arXiv Detail & Related papers (2022-04-20T10:18:39Z)
- Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders [15.254297587065595]
The recently proposed identifiable variational autoencoder (iVAE) provides a promising approach for learning latent independent components of the data.
We develop a new approach, the covariate-informed identifiable VAE (CI-iVAE).
In doing so, the objective function enforces the inverse relation, and the learned representation contains more information about the observations.
arXiv Detail & Related papers (2022-02-09T00:18:33Z)
- Learning Summary Statistics for Bayesian Inference with Autoencoders [58.720142291102135]
We use the inner dimension of deep neural network-based autoencoders as summary statistics.
To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information that has been used to generate the training data.
arXiv Detail & Related papers (2022-01-28T12:00:31Z)
- Reproducible, incremental representation learning with Rosetta VAE [0.0]
Variational autoencoders are among the most popular methods for distilling low-dimensional structure from high-dimensional data.
We introduce the Rosetta VAE, a method of distilling previously learned representations and retraining new models to reproduce and build on prior results.
We demonstrate that the R-VAE reconstructs data as well as the VAE and $\beta$-VAE, and outperforms both methods in recovery of a target latent space in a sequential training setting.
arXiv Detail & Related papers (2022-01-13T20:45:35Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The variational autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Unsupervised Learning of slow features for Data Efficient Regression [15.73372211126635]
We propose the slow variational autoencoder (S-VAE), an extension to the $\beta$-VAE which applies a temporal similarity constraint to the latent representations (a hypothetical sketch follows this list).
We evaluate the three methods on their data efficiency in downstream tasks using a synthetic 2D ball-tracking dataset, a dataset from a reinforcement learning environment, and a dataset generated using the DeepMind Lab environment.
arXiv Detail & Related papers (2020-12-11T12:19:45Z)
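As a rough illustration of the S-VAE idea above, the following sketch adds a temporal-similarity (slowness) penalty on consecutive latent means to a $\beta$-VAE-style loss; the function name, tensor layout, and the squared-difference form of the penalty are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a beta-VAE loss with a temporal-similarity
# (slowness) penalty on consecutive latent means; the exact form used
# by the S-VAE paper may differ.
import torch

def slow_beta_vae_loss(recon_loss: torch.Tensor,  # (T, batch) reconstruction terms
                       kl: torch.Tensor,          # (T, batch) KL(q(z|x) || p(z))
                       mu: torch.Tensor,          # (T, batch, latent) posterior means
                       beta: float = 4.0,
                       lam: float = 1.0) -> torch.Tensor:
    # Standard beta-VAE terms, averaged over time and batch.
    elbo_terms = recon_loss.mean() + beta * kl.mean()
    # Slowness penalty: consecutive latent means should stay close.
    slowness = (mu[1:] - mu[:-1]).pow(2).sum(dim=-1).mean()
    return elbo_terms + lam * slowness
```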
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.