Covariate-informed Representation Learning with Samplewise Optimal
Identifiable Variational Autoencoders
- URL: http://arxiv.org/abs/2202.04206v1
- Date: Wed, 9 Feb 2022 00:18:33 GMT
- Title: Covariate-informed Representation Learning with Samplewise Optimal
Identifiable Variational Autoencoders
- Authors: Young-geun Kim, Ying Liu, Xuexin Wei
- Abstract summary: The recently proposed identifiable variational autoencoder (iVAE) provides a promising approach for learning latent independent components of the data.
We develop a new approach, covariate-informed identifiable VAE (CI-iVAE).
In doing so, the objective function enforces the inverse relation, and the learned representation contains more information about the observations.
- Score: 15.254297587065595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recently proposed identifiable variational autoencoder (iVAE; Khemakhem et
al. (2020)) framework provides a promising approach for learning latent
independent components of the data. Although the identifiability is appealing,
the objective function of iVAE does not enforce the inverse relation between
encoders and decoders. Without the inverse relation, representations from the
encoder in iVAE may not reconstruct observations, i.e., representations lose
information about the observations. To overcome this limitation, we develop a new
approach, covariate-informed identifiable VAE (CI-iVAE). Different from
previous iVAE implementations, our method critically leverages the posterior
distribution of latent variables conditioned only on observations. In doing so,
the objective function enforces the inverse relation, and the learned
representation contains more information about the observations. Furthermore, CI-iVAE
extends the original iVAE objective function to a larger class and finds the
optimal one among them, thus providing a better fit to the data. Theoretically,
our method has tighter evidence lower bounds (ELBOs) than the original iVAE. We
demonstrate that our approach can more reliably learn features of various
synthetic datasets, two benchmark image datasets (EMNIST and Fashion MNIST),
and a large-scale brain imaging dataset for adolescent mental health research.
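Since the abstract describes the mechanism only at a high level, below is a minimal sketch of the samplewise-optimal idea: compute the ELBO under both the covariate-conditioned posterior q(z|x,u) and the observation-only posterior q(z|x), and keep the tighter bound per sample. The two-candidate family, Gaussian forms, and all network shapes are illustrative assumptions; the paper optimizes over a larger class of objectives.

```python
# Minimal sketch of a samplewise-optimal ELBO in the spirit of CI-iVAE.
# Assumptions (not from the paper): Gaussian posteriors, a two-candidate
# objective family instead of the paper's full class, and toy MLP networks.
import torch
import torch.nn as nn

D_X, D_U, D_Z = 20, 5, 2  # observation, covariate, latent dims (illustrative)

enc_xu = nn.Sequential(nn.Linear(D_X + D_U, 64), nn.ReLU(), nn.Linear(64, 2 * D_Z))
enc_x  = nn.Sequential(nn.Linear(D_X, 64), nn.ReLU(), nn.Linear(64, 2 * D_Z))
dec    = nn.Sequential(nn.Linear(D_Z, 64), nn.ReLU(), nn.Linear(64, D_X))
prior_u = nn.Linear(D_U, 2 * D_Z)  # conditional prior p(z|u), as in iVAE

def gaussian(params):
    mu, logvar = params.chunk(2, dim=-1)
    return torch.distributions.Normal(mu, (0.5 * logvar).exp())

def elbo(q, x, u):
    """Per-sample ELBO: E_q[log p(x|z)] - KL(q || p(z|u))."""
    z = q.rsample()
    recon = torch.distributions.Normal(dec(z), 1.0).log_prob(x).sum(-1)
    kl = torch.distributions.kl_divergence(q, gaussian(prior_u(u))).sum(-1)
    return recon - kl

def ci_ivae_loss(x, u):
    elbo_xu = elbo(gaussian(enc_xu(torch.cat([x, u], -1))), x, u)
    elbo_x  = elbo(gaussian(enc_x(x)), x, u)  # posterior given x only
    # Samplewise optimum over the (two-member) candidate family; both are
    # valid lower bounds on log p(x|u), so the max is a tighter valid bound.
    return -torch.maximum(elbo_xu, elbo_x).mean()

x, u = torch.randn(8, D_X), torch.randn(8, D_U)
print(ci_ivae_loss(x, u))
```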
Related papers
- $\alpha$-TCVAE: On the relationship between Disentanglement and Diversity [21.811889512977924]
In this work, we introduce $\alpha$-TCVAE, a variational autoencoder optimized using a novel total correlation (TC) lower bound.
We present quantitative analyses that support the idea that disentangled representations lead to better generative capabilities and diversity.
Our results demonstrate that $\alpha$-TCVAE consistently learns more disentangled representations than baselines and generates more diverse observations.
arXiv Detail & Related papers (2024-11-01T13:50:06Z)
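The central object in the summary above is the total correlation term. The sketch below is not the paper's novel TC lower bound; it is the standard minibatch TC estimator popularized by beta-TCVAE (Chen et al., 2018), included only to make the quantity concrete.

```python
# Naive minibatch estimate of TC(z) = KL(q(z) || prod_j q(z_j)).
# Illustrative only; bias of this estimator is discussed in Chen et al. (2018).
import math
import torch

def log_gaussian(z, mu, logvar):
    # log N(z | mu, diag(exp(logvar))), elementwise over dimensions
    return -0.5 * (math.log(2 * math.pi) + logvar + (z - mu) ** 2 / logvar.exp())

def total_correlation(z, mu, logvar):
    """z, mu, logvar: (B, D) tensors sampled/produced by the encoder."""
    B = z.size(0)
    # pairwise log densities: entry (i, k, j) = log q(z_i[j] | x_k)
    logqz_ij = log_gaussian(z.unsqueeze(1), mu.unsqueeze(0), logvar.unsqueeze(0))
    log_qz = torch.logsumexp(logqz_ij.sum(-1), dim=1) - math.log(B)         # log q(z_i)
    log_qz_marg = (torch.logsumexp(logqz_ij, dim=1) - math.log(B)).sum(-1)  # sum_j log q(z_i[j])
    return (log_qz - log_qz_marg).mean()

z = torch.randn(64, 8)
print(total_correlation(z, torch.zeros(64, 8), torch.zeros(64, 8)))
```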
- Interpretable Sentence Representation with Variational Autoencoders and Attention [0.685316573653194]
We develop methods to enhance the interpretability of recent representation learning techniques in natural language processing (NLP).
We leverage Variational Autoencoders (VAEs) due to their efficiency in relating observations to latent generative factors.
We build two models with inductive bias to separate information in latent representations into understandable concepts without annotated data.
arXiv Detail & Related papers (2023-05-04T13:16:15Z)
- Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We endow sequences of codewords with discrete distributions and learn a deterministic decoder that transports the codeword-sequence distribution to the data distribution.
We further develop theory connecting this approach with the clustering viewpoint of the Wasserstein (WS) distance, allowing a better and more controllable clustering solution.
arXiv Detail & Related papers (2023-02-12T13:51:36Z)
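To make the codeword story concrete, here is a standard nearest-codeword quantization step with a straight-through gradient, on which the Wasserstein-transport formulation builds; the WS-specific training objective is not reproduced, and all sizes are assumptions.

```python
# Standard vector-quantization step (VQ-VAE style); VQ-WAE's Wasserstein
# objective is not shown here. Codebook size and dimensions are illustrative.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=128, dim=16):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z_e):                         # z_e: (B, dim) encoder outputs
        d = torch.cdist(z_e, self.codebook.weight)  # (B, num_codes) distances
        idx = d.argmin(dim=1)                       # index of nearest codeword
        z_q = self.codebook(idx)                    # quantized latents
        # straight-through estimator: copy gradients from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx

z_q, idx = VectorQuantizer()(torch.randn(8, 16))
```

A deterministic decoder then maps z_q to data space, so the distribution over codeword sequences is what gets transported to the data distribution.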
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
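The summary does not specify how curiosity is computed, so the following is a hypothetical recipe, not CCLF's exact formulation: turn the contrastive agreement between two augmented views of an observation into a per-sample weight for prioritized replay.

```python
# Hypothetical curiosity weighting from contrastive agreement; names and the
# exact mapping from agreement to priority are illustrative assumptions.
import torch
import torch.nn.functional as F

def curiosity_weights(anchor, positive, temperature=0.1):
    """Low agreement between two augmented views => high curiosity weight."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    agreement = (a * p).sum(-1) / temperature       # cosine similarity per sample
    curiosity = 1.0 - torch.sigmoid(agreement)      # uncertain pairs score higher
    return curiosity / curiosity.sum()              # replay-sampling probabilities

w = curiosity_weights(torch.randn(32, 50), torch.randn(32, 50))
idx = torch.multinomial(w, num_samples=16, replacement=True)  # prioritized minibatch
```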
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
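As an illustration of regularizing a latent space toward diversity and low uncertainty, the sketch below applies batch normalization to posterior means and dropout to posterior log-variances; this specific mechanism is an assumption and may differ from DU-VAE's exact construction.

```python
# Illustrative posterior-parameter regularizer; the BN/Dropout pairing here
# is an assumption about the mechanism, not a faithful reproduction.
import torch
import torch.nn as nn

class DUEncoderHead(nn.Module):
    def __init__(self, hidden=64, d_z=8, p_drop=0.2):
        super().__init__()
        self.mu = nn.Linear(hidden, d_z)
        self.logvar = nn.Linear(hidden, d_z)
        self.bn = nn.BatchNorm1d(d_z)
        self.drop = nn.Dropout(p_drop)

    def forward(self, h):
        mu = self.bn(self.mu(h))            # spread posterior means across the batch
        logvar = self.drop(self.logvar(h))  # randomly zero some log-variances
        return mu, logvar

mu, logvar = DUEncoderHead()(torch.randn(16, 64))
```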
- InteL-VAEs: Adding Inductive Biases to Variational Auto-Encoders via Intermediary Latents [60.785317191131284]
We introduce a simple and effective method for learning VAEs with controllable biases by using an intermediary set of latent variables.
In particular, it allows us to impose desired properties like sparsity or clustering on learned representations.
We show that this, in turn, allows InteL-VAEs to learn both better generative models and representations.
arXiv Detail & Related papers (2021-06-25T16:34:05Z)
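A minimal sketch of the intermediary-latent idea: decode from w = g(z) instead of z, with g injecting the desired bias. The soft-threshold choice of g (for sparsity) and all dimensions are illustrative assumptions.

```python
# Sketch of decoding through an intermediary latent w = g(z); g is chosen
# here to encourage sparsity, one of the biases mentioned in the summary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntermediaryLatentVAE(nn.Module):
    def __init__(self, d_x=20, d_z=8):
        super().__init__()
        self.enc = nn.Linear(d_x, 2 * d_z)
        self.dec = nn.Linear(d_z, d_x)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        w = F.softshrink(z, lambd=0.5)   # intermediary map: sparsifies z
        return self.dec(w), mu, logvar

recon, mu, logvar = IntermediaryLatentVAE()(torch.randn(4, 20))
```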
- From Canonical Correlation Analysis to Self-supervised Graph Neural Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
arXiv Detail & Related papers (2021-06-23T15:55:47Z)
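The feature-level CCA-inspired objective can be sketched as an invariance term between two augmented views plus decorrelation terms on each view's standardized features; the weighting constant below is an assumption.

```python
# Feature-level CCA-style self-supervised loss for two augmented views,
# following the general recipe described in the summary.
import torch

def cca_ssg_loss(z1, z2, lam=1e-3):
    N = z1.size(0)
    z1 = (z1 - z1.mean(0)) / z1.std(0)      # standardize each feature
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    inv = ((z1 - z2) ** 2).sum() / N        # invariance between views
    I = torch.eye(z1.size(1))
    c1 = ((z1.T @ z1 / N - I) ** 2).sum()   # decorrelate view-1 features
    c2 = ((z2.T @ z2 / N - I) ** 2).sum()   # decorrelate view-2 features
    return inv + lam * (c1 + c2)

loss = cca_ssg_loss(torch.randn(256, 64), torch.randn(256, 64))
```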
- Multivariate Data Explanation by Jumping Emerging Patterns Visualization [78.6363825307044]
We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike similar existing approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logic combinations of data variables.
arXiv Detail & Related papers (2021-06-21T13:49:44Z)
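A Jumping Emerging Pattern is a feature combination whose support "jumps" from zero in one class to nonzero in another. Below is a brute-force toy search, illustrative only; VAX's actual mining and visualization pipeline is far more involved.

```python
# Toy brute-force search for Jumping Emerging Patterns over binary features.
from itertools import combinations

def jumping_emerging_patterns(rows, labels, max_len=2):
    """rows: sets of binary feature names present in each record."""
    classes = set(labels)
    features = sorted(set().union(*rows))
    jeps = []
    for k in range(1, max_len + 1):
        for pattern in combinations(features, k):
            support = {c: 0 for c in classes}
            for row, label in zip(rows, labels):
                if set(pattern) <= row:
                    support[label] += 1
            present = [c for c in classes if support[c] > 0]
            if len(present) == 1:   # "jumps": support in exactly one class
                jeps.append((pattern, present[0]))
    return jeps

rows = [{"a", "b"}, {"a"}, {"b", "c"}, {"c"}]
print(jumping_emerging_patterns(rows, ["pos", "pos", "neg", "neg"]))
```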
- Variational Mutual Information Maximization Framework for VAE Latent Codes with Continuous and Discrete Priors [5.317548969642376]
Variational Autoencoder (VAE) is a scalable method for learning directed latent variable models of complex data.
We propose a Variational Mutual Information Maximization Framework for VAE to address this issue.
arXiv Detail & Related papers (2020-06-02T09:05:51Z)
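A common way to realize such a variational mutual-information term is a Barber-Agakov-style bound, where an auxiliary network predicts the latent code back from the decoder output. Whether VMI-VAE uses exactly this construction is not stated in the summary, so treat the sketch as a generic illustration; all shapes are assumptions.

```python
# Generic variational MI term: E[log r(z|x_hat)] lower-bounds I(z; x_hat)
# up to the entropy H(z). The use of reconstructions and all network shapes
# are illustrative assumptions, not VMI-VAE's confirmed design.
import torch
import torch.nn as nn

d_z, d_x = 8, 20
dec = nn.Sequential(nn.Linear(d_z, 64), nn.ReLU(), nn.Linear(64, d_x))
aux = nn.Sequential(nn.Linear(d_x, 64), nn.ReLU(), nn.Linear(64, 2 * d_z))

def mi_bound(z):
    x_hat = dec(z)
    mu, logvar = aux(x_hat).chunk(2, dim=-1)
    r = torch.distributions.Normal(mu, (0.5 * logvar).exp())
    return r.log_prob(z).sum(-1).mean()   # maximize alongside the ELBO

print(mi_bound(torch.randn(16, d_z)))
```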
- VMI-VAE: Variational Mutual Information Maximization Framework for VAE With Discrete and Continuous Priors [5.317548969642376]
Variational Autoencoder is a scalable method for learning latent variable models of complex data.
We propose a Variational Mutual Information Maximization Framework for VAE to address this issue.
arXiv Detail & Related papers (2020-05-28T12:44:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.