Comparing the latent space of generative models
- URL: http://arxiv.org/abs/2207.06812v1
- Date: Thu, 14 Jul 2022 10:39:02 GMT
- Title: Comparing the latent space of generative models
- Authors: Andrea Asperti and Valerio Tonelli
- Abstract summary: Different encodings of datapoints in the latent space of latent-vector generative models may result in more or less effective and disentangled characterizations of the different explanatory factors of variation behind the data.
A simple linear mapping is enough to pass from one latent space to another while preserving most of the information.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Different encodings of datapoints in the latent space of latent-vector
generative models may result in more or less effective and disentangled
characterizations of the different explanatory factors of variation behind the
data. Many works have recently been devoted to the exploration of the latent
space of specific models, mostly focused on the study of how features are
disentangled and of how trajectories producing desired alterations of data in
the visible space can be found. In this work we address the more general
problem of comparing the latent spaces of different models, looking for
transformations between them. We confined the investigation to the familiar and
largely investigated case of generative models for the data manifold of human
faces. The surprising, preliminary result reported in this article is that
(provided models have not been taught or explicitly conceived to act
differently) a simple linear mapping is enough to pass from one latent space to
another while preserving most of the information.
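To make this concrete, below is a minimal sketch (not the authors' released code) of how such a mapping could be estimated: given latent codes of the same datapoints under two models, an affine map is fitted by ordinary least squares and can then be used to "stitch" one model's latent space to another model's decoder. The encoder and decoder names in the comments are hypothetical placeholders.
```python
import numpy as np

def fit_affine_map(Z_a, Z_b):
    """Fit M, b minimising ||Z_a @ M + b - Z_b||^2 by least squares.

    Z_a: (n, d_a) latent codes of n datapoints under model A.
    Z_b: (n, d_b) latent codes of the same n datapoints under model B.
    """
    # Append a constant column so the bias b is estimated jointly with M.
    Z_a_aug = np.hstack([Z_a, np.ones((Z_a.shape[0], 1))])
    W, *_ = np.linalg.lstsq(Z_a_aug, Z_b, rcond=None)
    return W[:-1], W[-1]  # M: (d_a, d_b), b: (d_b,)

# Usage sketch with hypothetical models (encode_a, encode_b, decode_b are placeholders):
#   M, b = fit_affine_map(encode_a(images), encode_b(images))
#   x_hat = decode_b(encode_a(new_image) @ M + b)   # "stitched" generation via model B
```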
Related papers
- All Roads Lead to Rome? Exploring Representational Similarities Between Latent Spaces of Generative Image Models [22.364723506539974]
We measure the latent space similarity of four generative image models: VAEs, GANs, Normalizing Flows (NFs), and Diffusion Models (DMs).
Our methodology involves training linear maps between frozen latent spaces to "stitch" arbitrary pairs of encoders and decoders.
Our main findings are that linear maps between latent spaces of performant models preserve most visual information even when latent sizes differ.
arXiv Detail & Related papers (2024-07-18T12:23:57Z) - Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem of learning with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Attribute Graphs Underlying Molecular Generative Models: Path to Learning with Limited Data [42.517927809224275]
We provide an algorithm that relies on perturbation experiments on latent codes of a pre-trained generative autoencoder to uncover an attribute graph; see the illustrative sketch below.
We show that one can fit an effective graphical model that models a structural equation model between latent codes.
Using a pre-trained generative autoencoder trained on a large dataset of small molecules, we demonstrate that the graphical model can be used to predict a specific property.
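As an illustration only (a hypothetical sketch, not the algorithm from the paper), such perturbation experiments can be pictured as a finite-difference probe of a pre-trained autoencoder's latent code; the names encode, decode and predict_attributes are placeholders supplied by the user.
```python
import numpy as np

def latent_sensitivity(encode, decode, predict_attributes, x, eps=0.1):
    """Finite-difference probe: how does each latent dimension of a
    pre-trained autoencoder influence each predicted attribute?

    encode, decode and predict_attributes are user-supplied callables
    (hypothetical placeholders, not part of any released codebase).
    """
    z = np.asarray(encode(x), dtype=float)            # (d,) latent code of one datapoint
    base = np.asarray(predict_attributes(decode(z)))  # (k,) baseline attribute values
    sensitivities = np.zeros((z.shape[0], base.shape[0]))
    for i in range(z.shape[0]):
        z_pert = z.copy()
        z_pert[i] += eps                              # perturb a single latent coordinate
        shifted = np.asarray(predict_attributes(decode(z_pert)))
        sensitivities[i] = (shifted - base) / eps
    # Thresholding this matrix is one crude way to read off a latent-to-attribute graph.
    return sensitivities
```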
arXiv Detail & Related papers (2022-07-14T19:20:30Z) - De-Biasing Generative Models using Counterfactual Methods [0.0]
We propose a new decoder-based framework named the Causal Counterfactual Generative Model (CCGM).
Our proposed method combines a causal latent space VAE model with specific modifications to emphasize causal fidelity.
We explore how better disentanglement of causal learning and encoding/decoding generates higher causal intervention quality.
arXiv Detail & Related papers (2022-07-04T16:53:20Z) - Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore the phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, the non-degeneracy of data distributions, and the model's generalisation capabilities.
The main thrust of our present analysis is on the influence of nonlinear feature transformations mapping original data into higher- and possibly infinite-dimensional spaces on the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z) - Smoothing the Generative Latent Space with Mixup-based Distance Learning [32.838539968751924]
We consider the situation where neither a large-scale dataset of interest nor a transferable source dataset is available.
We propose latent mixup-based distance regularization on the feature space of both a generator and the counterpart discriminator.
arXiv Detail & Related papers (2021-11-23T06:39:50Z) - Expressivity of Parameterized and Data-driven Representations in Quality Diversity Search [111.06379262544911]
We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces.
A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples.
arXiv Detail & Related papers (2021-05-10T10:27:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.