All Roads Lead to Rome? Exploring Representational Similarities Between Latent Spaces of Generative Image Models
- URL: http://arxiv.org/abs/2407.13449v1
- Date: Thu, 18 Jul 2024 12:23:57 GMT
- Title: All Roads Lead to Rome? Exploring Representational Similarities Between Latent Spaces of Generative Image Models
- Authors: Charumathi Badrinath, Usha Bhalla, Alex Oesterling, Suraj Srinivas, Himabindu Lakkaraju
- Abstract summary: We measure the latent space similarity of four generative image models: VAEs, GANs, Normalizing Flows (NFs), and Diffusion Models (DMs).
Our methodology involves training linear maps between frozen latent spaces to "stitch" arbitrary pairs of encoders and decoders.
Our main finding is that linear maps between latent spaces of performant models preserve most visual information even when latent sizes differ.
- Score: 22.364723506539974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Do different generative image models secretly learn similar underlying representations? We investigate this by measuring the latent space similarity of four different models: VAEs, GANs, Normalizing Flows (NFs), and Diffusion Models (DMs). Our methodology involves training linear maps between frozen latent spaces to "stitch" arbitrary pairs of encoders and decoders and measuring output-based and probe-based metrics on the resulting "stitched" models. Our main findings are that linear maps between latent spaces of performant models preserve most visual information even when latent sizes differ; for CelebA models, gender is the most similarly represented probe-able attribute. Finally, we show on an NF that latent space representations converge early in training.
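As a concrete illustration of the stitching methodology, here is a minimal PyTorch sketch. The frozen modules `enc_A` and `dec_B`, the latent dimensions `d_A` and `d_B`, and the MSE objective on paired latents are all assumptions for illustration; the abstract only states that linear maps are trained between frozen latent spaces, not the exact objective.

```python
import torch
import torch.nn as nn

# Hypothetical latent dimensions of two pretrained, frozen generative models.
d_A, d_B = 128, 256

# The "stitch": a single linear map from model A's latent space to model B's.
stitch = nn.Linear(d_A, d_B)
optimizer = torch.optim.Adam(stitch.parameters(), lr=1e-3)

def train_step(x, enc_A, enc_B):
    """One gradient step fitting stitch(enc_A(x)) to enc_B(x) on a batch x."""
    with torch.no_grad():  # both encoders stay frozen
        z_A = enc_A(x)
        z_B = enc_B(x)
    loss = nn.functional.mse_loss(stitch(z_A), z_B)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def stitched_autoencode(x, enc_A, dec_B):
    """Encode with model A, translate latents linearly, decode with model B."""
    with torch.no_grad():
        return dec_B(stitch(enc_A(x)))
```

Output-based metrics (e.g. reconstruction quality of `stitched_autoencode`) and probe-based metrics can then be computed on the stitched model just as on an ordinary autoencoder.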
Related papers
- Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective [52.778766190479374]
Latent-based image generative models have achieved notable success in image generation tasks.
Despite sharing the same latent space, autoregressive models significantly lag behind LDMs and MIMs in image generation.
We propose a simple but effective discrete image tokenizer to stabilize the latent space for image generative modeling.
arXiv Detail & Related papers (2024-10-16T12:13:17Z)
- Towards Model-Agnostic Dataset Condensation by Heterogeneous Models [13.170099297210372]
We develop a novel method to produce universally applicable condensed images through cross-model interactions.
By balancing each model's contribution and closely preserving semantic meaning, our approach overcomes the limitations associated with model-specific condensed images.
arXiv Detail & Related papers (2024-09-22T17:13:07Z)
- FreeSeg-Diff: Training-Free Open-Vocabulary Segmentation with Diffusion Models [56.71672127740099]
We focus on the task of image segmentation, which is traditionally solved by training models on closed-vocabulary datasets.
We leverage different, relatively small open-source foundation models for zero-shot open-vocabulary segmentation.
Our approach (dubbed FreeSeg-Diff), which does not rely on any training, outperforms many training-based approaches on both Pascal VOC and COCO datasets.
arXiv Detail & Related papers (2024-03-29T10:38:25Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Comparing the latent space of generative models [0.0]
Different encodings of datapoints in the latent space of latent-vector generative models may result in more or less effective and disentangled characterizations of the different explanatory factors of variation behind the data.
A simple linear mapping is enough to pass from one latent space to another while preserving most of the information.
arXiv Detail & Related papers (2022-07-14T10:39:02Z)
- Linear Connectivity Reveals Generalization Strategies [54.947772002394736]
Some pairs of finetuned models have large barriers of increasing loss on the linear paths between them.
We find distinct clusters of models which are linearly connected on the test loss surface, but are disconnected from models outside the cluster.
Our work demonstrates how the geometry of the loss surface can guide models towards different functions.
arXiv Detail & Related papers (2022-05-24T23:43:02Z)
- Manifold Topology Divergence: a Framework for Comparing Data Manifolds [109.0784952256104]
We develop a framework for comparing data manifolds, aimed at the evaluation of deep generative models.
Based on the Cross-Barcode, we introduce the Manifold Topology Divergence score (MTop-Divergence).
We demonstrate that the MTop-Divergence accurately detects various degrees of mode-dropping, intra-mode collapse, mode invention, and image disturbance.
arXiv Detail & Related papers (2021-06-08T00:30:43Z)
- Generative Models as Distributions of Functions [72.2682083758999]
Generative models are typically trained on grid-like data such as images.
In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions.
arXiv Detail & Related papers (2021-02-09T11:47:55Z)
- Atlas Generative Models and Geodesic Interpolation [0.20305676256390928]
We define the general class of Atlas Generative Models (AGMs), models with a hybrid discrete-continuous latent space.
We exemplify this by generalizing an algorithm for graph-based geodesic interpolation to the setting of AGMs, and verify its performance experimentally.
arXiv Detail & Related papers (2021-01-30T16:35:25Z)
- Isometric Gaussian Process Latent Variable Model for Dissimilarity Data [0.0]
We present a probabilistic model where the latent variable respects both the distances and the topology of the modeled data.
The model is inferred by variational inference based on observations of pairwise distances.
arXiv Detail & Related papers (2020-06-21T08:56:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.