VMI-VAE: Variational Mutual Information Maximization Framework for VAE With Discrete and Continuous Priors
- URL: http://arxiv.org/abs/2005.13953v1
- Date: Thu, 28 May 2020 12:44:23 GMT
- Title: VMI-VAE: Variational Mutual Information Maximization Framework for VAE With Discrete and Continuous Priors
- Authors: Andriy Serdega, Dae-Shik Kim
- Abstract summary: Variational Autoencoder is a scalable method for learning latent variable models of complex data, but it does not explicitly measure the quality of learned representations.
We propose a Variational Mutual Information Maximization Framework for VAE to address this issue.
- Score: 5.317548969642376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational Autoencoder is a scalable method for learning latent variable
models of complex data. It employs a clear objective that can be easily
optimized. However, it does not explicitly measure the quality of learned
representations. We propose a Variational Mutual Information Maximization
Framework for VAE to address this issue. It provides an objective that
maximizes the mutual information between latent codes and observations. The
objective acts as a regularizer that forces the VAE not to ignore the latent code
and allows one to select particular components of the code to be most informative
with respect to the observations. On top of that, the proposed framework
provides a way to evaluate mutual information between latent codes and
observations for a fixed VAE model.
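The kind of objective described above can be illustrated with the classical Barber–Agakov variational lower bound on mutual information, I(Z; X) ≥ H(Z) + E[log q(z|x)], which underlies many variational MI-maximization schemes. Below is a minimal NumPy sketch on a toy linear-Gaussian model where the true MI is known in closed form; the toy model and the auxiliary distribution q are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200_000, 0.5

# Toy joint distribution: z ~ N(0, 1), x = z + sigma * noise
z = rng.standard_normal(n)
x = z + sigma * rng.standard_normal(n)

# Variational (Barber-Agakov) bound: I(Z; X) >= H(Z) + E[log q(z | x)].
# Using the exact Gaussian posterior as q makes the bound tight.
post_mean = x / (1 + sigma**2)
post_var = sigma**2 / (1 + sigma**2)
log_q = -0.5 * np.log(2 * np.pi * post_var) - (z - post_mean) ** 2 / (2 * post_var)

h_z = 0.5 * np.log(2 * np.pi * np.e)  # differential entropy of N(0, 1)
mi_bound = h_z + log_q.mean()

mi_true = 0.5 * np.log(1 + 1 / sigma**2)  # analytic I(Z; X) for this model
print(f"bound={mi_bound:.3f}  true={mi_true:.3f}")  # both ~ 0.80
```

In a VMI-VAE-style setup, q would instead be a learned auxiliary network and the bound would be added to the ELBO as a regularizer; here q is fixed to the exact posterior purely to show that the bound can be tight.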
Related papers
- Self-Supervised Representation Learning with Meta Comprehensive Regularization [11.387994024747842]
We introduce a module called CompMod with Meta Comprehensive Regularization (MCR), embedded into existing self-supervised frameworks.
We update our proposed model through a bi-level optimization mechanism, enabling it to capture comprehensive features.
We provide theoretical support for our proposed method from information-theoretic and causal counterfactual perspectives.
arXiv Detail & Related papers (2024-03-03T15:53:48Z)
- RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation [53.4319652364256]
This paper presents the RefSAM model, which explores the potential of SAM for referring video object segmentation.
Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight Cross-Modal MLP.
We employ a parameter-efficient tuning strategy to align and fuse the language and vision features effectively.
arXiv Detail & Related papers (2023-07-03T13:21:58Z)
- DCID: Deep Canonical Information Decomposition [84.59396326810085]
We consider the problem of identifying the signal shared between two one-dimensional target variables.
We propose ICM, an evaluation metric which can be used in the presence of ground-truth labels.
We also propose Deep Canonical Information Decomposition (DCID) - a simple, yet effective approach for learning the shared variables.
arXiv Detail & Related papers (2023-06-27T16:59:06Z)
- Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We place a discrete distribution over sequences of codewords and learn a deterministic decoder that transports this distribution to the data distribution.
We develop further theories to connect it with the clustering viewpoint of WS distance, allowing us to have a better and more controllable clustering solution.
arXiv Detail & Related papers (2023-02-12T13:51:36Z)
- Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders [15.254297587065595]
Recently proposed identifiable variational autoencoder (iVAE) provides a promising approach for learning latent independent components of the data.
We develop a new approach, covariate-informed identifiable VAE (CI-iVAE).
In doing so, the objective function enforces the inverse relation, and the learned representation contains more information about the observations.
arXiv Detail & Related papers (2022-02-09T00:18:33Z)
- InteL-VAEs: Adding Inductive Biases to Variational Auto-Encoders via Intermediary Latents [60.785317191131284]
We introduce a simple and effective method for learning VAEs with controllable biases by using an intermediary set of latent variables.
In particular, it allows us to impose desired properties like sparsity or clustering on learned representations.
We show that this, in turn, allows InteL-VAEs to learn both better generative models and representations.
arXiv Detail & Related papers (2021-06-25T16:34:05Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically.
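An analytic prediction variance of this kind can be motivated by the standard Gaussian maximum-likelihood result: for a decoder with a shared output variance, the negative log-likelihood is minimized by setting the variance to the mean squared reconstruction error. A small NumPy sketch of that general fact (an illustration only, not the paper's exact decoder; the data and reconstructions are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((1000, 8))               # stand-in "observations"
x_hat = x + 0.3 * rng.standard_normal(x.shape)   # stand-in reconstructions

mse = ((x - x_hat) ** 2).mean()

def gauss_nll(s2):
    """Per-dimension Gaussian NLL with shared output variance s2."""
    return 0.5 * np.log(2 * np.pi * s2) + mse / (2 * s2)

# Closed-form minimizer of the NLL over s2 is the MSE itself.
s2_star = mse
assert gauss_nll(s2_star) <= min(gauss_nll(0.5 * mse), gauss_nll(2 * mse))
print(f"optimal shared variance = {s2_star:.3f}")  # ~ 0.09 here
```

Computing the variance this way, instead of fixing it as a hyperparameter, is what makes the decoder "calibrated": the likelihood automatically balances the reconstruction and KL terms.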
arXiv Detail & Related papers (2020-06-23T17:57:47Z)
- Variational Mutual Information Maximization Framework for VAE Latent Codes with Continuous and Discrete Priors [5.317548969642376]
Variational Autoencoder (VAE) is a scalable method for learning directed latent variable models of complex data, but it does not explicitly measure the quality of learned representations.
We propose a Variational Mutual Information Maximization Framework for VAE to address this issue.
arXiv Detail & Related papers (2020-06-02T09:05:51Z)
- Learning Discrete Structured Representations by Adversarially Maximizing Mutual Information [39.87273353895564]
We propose learning discrete structured representations from unlabeled data by maximizing the mutual information between a structured latent variable and a target variable.
Our key technical contribution is an adversarial objective that can be used to tractably estimate mutual information assuming only the feasibility of cross entropy calculation.
We apply our model on document hashing and show that it outperforms current best baselines based on discrete and vector quantized variational autoencoders.
arXiv Detail & Related papers (2020-04-08T13:31:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.