Learning Disentangled Latent Factors from Paired Data in Cross-Modal
Retrieval: An Implicit Identifiable VAE Approach
- URL: http://arxiv.org/abs/2012.00682v1
- Date: Tue, 1 Dec 2020 17:47:50 GMT
- Title: Learning Disentangled Latent Factors from Paired Data in Cross-Modal
Retrieval: An Implicit Identifiable VAE Approach
- Authors: Minyoung Kim, Ricardo Guerrero, Vladimir Pavlovic
- Abstract summary: We deal with the problem of learning the underlying disentangled latent factors that are shared between the paired bi-modal data in cross-modal retrieval.
We propose a novel idea of the implicit decoder, which completely removes the ambient data decoding module from a latent variable model.
Our model is shown to identify the factors accurately, significantly outperforming conventional encoder-decoder latent variable models.
- Score: 33.61751393224223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We deal with the problem of learning the underlying disentangled latent
factors that are shared between the paired bi-modal data in cross-modal
retrieval. Our assumption is that the data in both modalities are complex,
structured, and high dimensional (e.g., image and text), for which the
conventional deep auto-encoding latent variable models such as the Variational
Autoencoder (VAE) often struggle to train an accurate decoder or to synthesize
realistic data. A suboptimally trained decoder can potentially harm the
model's capability of identifying the true factors. In this paper we propose a
novel idea of the implicit decoder, which completely removes the ambient data
decoding module from a latent variable model, via implicit encoder inversion
that is achieved by Jacobian regularization of the low-dimensional embedding
function. Motivated by the recent Identifiable VAE (IVAE) model, we modify it
to incorporate the query modality data as conditioning auxiliary input, which
allows us to prove that the true parameters of the model can be identified
under some regularity conditions. Tested on various datasets where the true
factors are fully/partially available, our model is shown to identify the
factors accurately, significantly outperforming conventional encoder-decoder
latent variable models. We also test our model on Recipe1M, the large-scale
food image/recipe dataset, where the learned factors by our approach highly
coincide with the most pronounced food factors that are widely agreed on,
including savoriness, wateriness, and greenness.
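The abstract combines two ingredients: (i) an encoder-only model whose "implicit decoder" is realized by regularizing the Jacobian of the low-dimensional embedding function, and (ii) an IVAE-style prior conditioned on the query-modality input. Below is a minimal PyTorch-style sketch of these two ingredients; the MLP architectures, the Gaussian conditional prior, and the stochastic isometry-style form of the Jacobian penalty are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (assumptions: MLP encoder, Gaussian conditional prior,
# stochastic isometry-style Jacobian penalty). Not the paper's exact model.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Low-dimensional embedding function f: x -> z (no ambient decoder)."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )
    def forward(self, x):
        return self.net(x)

class ConditionalPrior(nn.Module):
    """IVAE-style prior p(z | u) conditioned on the query-modality input u."""
    def __init__(self, u_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(u_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))
    def forward(self, u):
        mu, log_var = self.net(u).chunk(2, dim=-1)
        return mu, log_var

def jacobian_penalty(encoder, x, n_probes=1):
    """Stochastic surrogate for implicit encoder inversion: push
    ||J_f(x)^T v||^2 toward ||v||^2 for random probes v, which encourages
    the encoder Jacobian to behave like a (local) isometry on average."""
    x = x.requires_grad_(True)
    z = encoder(x)
    penalty = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(z)
        (jtv,) = torch.autograd.grad((z * v).sum(), x, create_graph=True)
        penalty = penalty + ((jtv.pow(2).sum(dim=1)
                              - v.pow(2).sum(dim=1)).pow(2)).mean()
    return penalty / n_probes

def training_step(encoder, prior, x, u, lam=1.0):
    """No reconstruction term: z is matched to the conditional prior p(z|u),
    and the Jacobian penalty substitutes for the removed decoder."""
    z = encoder(x)
    mu, log_var = prior(u)
    nll = 0.5 * (((z - mu) ** 2) / log_var.exp() + log_var).sum(dim=1).mean()
    return nll + lam * jacobian_penalty(encoder, x)

# Toy usage: ambient data x paired with query-modality embeddings u.
x = torch.randn(8, 784)
u = torch.randn(8, 32)
enc, prior = Encoder(784, 10), ConditionalPrior(32, 10)
loss = training_step(enc, prior, x, u)
loss.backward()
```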
Related papers
- Latent variable model for high-dimensional point process with structured missingness [4.451479907610764]
Longitudinal data are important in numerous fields, such as healthcare, sociology and seismology.
Real-world datasets can be high-dimensional, contain structured missingness patterns, and measurement time points can be governed by an unknown process.
We propose a flexible and efficient latent-variable model that is capable of addressing all these limitations.
arXiv Detail & Related papers (2024-02-08T15:41:48Z)
- Predictive variational autoencoder for learning robust representations of time-series data [0.0]
We propose a VAE architecture that predicts the next point in time and show that it mitigates the learning of spurious features.
We show that these two constraints, which together encourage VAEs to be smooth over time, produce robust latent representations and faithfully recover latent factors on synthetic datasets (a minimal sketch of the predictive term follows below).
arXiv Detail & Related papers (2023-12-12T02:06:50Z)
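The predictive constraint mentioned in the entry above can be illustrated with a small sketch: a VAE whose latent code must both reconstruct the current observation and predict the next one. The architecture and loss weights below are illustrative assumptions, not the paper's exact model.

```python
# Sketch of a next-step predictive term added to a VAE on time-series data.
import torch
import torch.nn as nn

class PredictiveVAE(nn.Module):
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))      # reconstruct x_t
        self.pred = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, x_dim))     # predict x_{t+1}

    def forward(self, x_t, x_next, beta=1.0, gamma=1.0):
        mu, log_var = self.enc(x_t).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterize
        rec = (self.dec(z) - x_t).pow(2).sum(dim=1).mean()
        pred = (self.pred(z) - x_next).pow(2).sum(dim=1).mean() # predictive constraint
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(dim=1).mean()
        return rec + gamma * pred + beta * kl

# Toy usage: consecutive time points of a 50-dimensional series.
x_t, x_next = torch.randn(16, 50), torch.randn(16, 50)
loss = PredictiveVAE(x_dim=50, z_dim=8)(x_t, x_next)
loss.backward()
```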
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models (see the quantization sketch below).
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
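The latent-quantization idea above, i.e. organizing the latent space by restricting each latent to a small set of learned values, can be sketched with a per-dimension scalar codebook and a straight-through estimator. The codebook size, the per-dimension design, and the loss weighting are assumptions for illustration, not the paper's exact construction.

```python
# Sketch of per-dimension latent quantization with a straight-through estimator.
import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    """Quantize each latent dimension to one of a few learned scalar values."""
    def __init__(self, z_dim, n_values=10, beta=0.25):
        super().__init__()
        self.beta = beta
        # one small learnable codebook of scalar values per latent dimension
        self.codebooks = nn.Parameter(
            torch.linspace(-1.0, 1.0, n_values).repeat(z_dim, 1))  # (z_dim, n_values)

    def forward(self, z):                                   # z: (batch, z_dim)
        dist = (z.unsqueeze(-1) - self.codebooks.unsqueeze(0)).abs()
        idx = dist.argmin(dim=-1, keepdim=True)              # nearest code per dimension
        z_q = torch.gather(self.codebooks.expand(z.size(0), -1, -1), 2, idx).squeeze(-1)
        # VQ-VAE-style losses: pull codes toward encodings and vice versa
        vq_loss = ((z.detach() - z_q).pow(2).mean()
                   + self.beta * (z - z_q.detach()).pow(2).mean())
        # straight-through estimator: quantized forward pass, identity gradient to z
        z_st = z + (z_q - z).detach()
        return z_st, vq_loss

# Toy usage: quantize a batch of 6-dimensional latent codes.
z = torch.randn(4, 6, requires_grad=True)
z_q, vq_loss = LatentQuantizer(z_dim=6)(z)
```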
- Informative Data Selection with Uncertainty for Multi-modal Object Detection [25.602915381482468]
We propose a universal uncertainty-aware multi-modal fusion model.
Our model reduces the randomness in fusion and generates reliable output.
Our fusion model is shown to resist severe noise interference such as Gaussian noise, motion blur, and frost, with only slight degradation.
arXiv Detail & Related papers (2023-04-23T16:36:13Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may come from biases in data acquisition rather than from genuine task-relevant information.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Posterior Collapse and Latent Variable Non-identifiability [54.842098835445]
We propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility.
Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
arXiv Detail & Related papers (2023-01-02T06:16:56Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take into account several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Entropy optimized semi-supervised decomposed vector-quantized variational autoencoder model based on transfer learning for multiclass text classification and generation [3.9318191265352196]
We propose a semi-supervised discrete latent variable model for multi-class text classification and text generation.
The proposed model employs the concept of transfer learning for training a quantized transformer model.
Experimental results indicate that the proposed model substantially outperforms state-of-the-art models.
arXiv Detail & Related papers (2021-11-10T07:07:54Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation (a sketch of the contrastive diversity term is given below).
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
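The diversity objective referenced in the entry above can be sketched as an InfoNCE-style instance-discrimination loss over two views of the synthesized batch; the feature dimensionality, the two-view setup, and the temperature are illustrative assumptions rather than the CMI paper's exact objective.

```python
# Sketch of a contrastive (instance-discrimination) diversity term over
# features of synthesized samples in data-free knowledge distillation.
import torch
import torch.nn.functional as F

def instance_discrimination_loss(feats_a, feats_b, tau=0.1):
    """feats_a / feats_b: features of two augmented views of the same
    synthesized batch. Each sample must be most similar to its own other
    view, which encourages synthesized instances to be distinguishable."""
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / tau                  # (N, N) similarity matrix
    targets = torch.arange(a.size(0))         # positive pairs on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random "features" of two views of 32 generated images.
f1 = torch.randn(32, 128, requires_grad=True)
f2 = torch.randn(32, 128)
loss = instance_discrimination_loss(f1, f2)
loss.backward()
```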
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks (one reading of the self-consistency term is sketched below).
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
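The self-consistency notion above is not spelled out in the summary; one plausible reading, assumed here, is that re-encoding a decoded sample should recover the latent code that generated it. The sketch below implements only that reading, with toy encoder/decoder networks.

```python
# Sketch of a self-consistency term: re-encode decoded samples and match
# the recovered latent to the one that generated them (an assumed reading).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 16))
dec = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 784))

def self_consistency(z):
    x_hat = dec(z)          # generate an ambient sample from z
    z_hat = enc(x_hat)      # re-encode the generated sample
    return (z_hat - z).pow(2).sum(dim=1).mean()

# Toy usage: this term would be added to the usual VAE objective.
z = torch.randn(8, 16)
loss = self_consistency(z)
loss.backward()
```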