Adversarial Manifold Matching via Deep Metric Learning for Generative
Modeling
- URL: http://arxiv.org/abs/2106.10777v1
- Date: Sun, 20 Jun 2021 23:25:01 GMT
- Title: Adversarial Manifold Matching via Deep Metric Learning for Generative
Modeling
- Authors: Mengyu Dai and Haibin Hang
- Abstract summary: We propose a manifold matching approach to generative models which includes a distribution generator and a metric generator.
The distribution generator aims at generating samples that follow some distribution condensed around the real data manifold.
The metric generator utilizes both real data and generated samples to learn a distance metric.
- Score: 5.5840609705075055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a manifold matching approach to generative models which includes a
distribution generator (or data generator) and a metric generator. In our
framework, we view the real data set as some manifold embedded in a
high-dimensional Euclidean space. The distribution generator aims at generating
samples that follow some distribution condensed around the real data manifold.
This is achieved by matching two sets of points using their geometric shape
descriptors, such as the centroid and $p$-diameter, under a learned distance metric;
the metric generator utilizes both real data and generated samples to learn a
distance metric which is close to some intrinsic geodesic distance on the real
data manifold. The produced distance metric is further used for manifold
matching. The two networks are learned simultaneously during the training
process. We apply the approach to both unsupervised and supervised learning
tasks: in the unconditional image generation task, the proposed method obtains
results competitive with existing generative models; in the super-resolution
task, we incorporate the framework into perception-based models and improve
visual quality by producing samples with more natural textures. Both
theoretical analysis and real-data experiments support the feasibility and
effectiveness of the proposed framework.
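To make the shape-descriptor matching concrete, here is a minimal sketch of such an objective in PyTorch. It assumes the learned metric is realized by a network (called `metric_net` here) that embeds points, with Euclidean distance taken in the embedding space; all names are illustrative and not from the paper's code.

```python
# Minimal sketch of a manifold-matching objective (illustrative only).
# Assumption: the learned metric is realized by a network `metric_net`
# that embeds points, with Euclidean distance in its output space.
import torch

def p_diameter(dists: torch.Tensor, p: float) -> torch.Tensor:
    """p-th power mean of pairwise distances within one point set."""
    n = dists.shape[0]
    off_diag = dists[~torch.eye(n, dtype=torch.bool, device=dists.device)]
    return off_diag.pow(p).mean().pow(1.0 / p)

def matching_loss(real: torch.Tensor, fake: torch.Tensor,
                  metric_net: torch.nn.Module, p: float = 2.0) -> torch.Tensor:
    er, ef = metric_net(real), metric_net(fake)
    # match centroids of the two embedded point sets
    centroid_term = (er.mean(dim=0) - ef.mean(dim=0)).norm()
    # match p-diameters (a scale/shape descriptor) of the two sets
    diameter_term = (p_diameter(torch.cdist(er, er), p)
                     - p_diameter(torch.cdist(ef, ef), p)).abs()
    return centroid_term + diameter_term
```

In the full adversarial setup the distribution generator would minimize a loss of this kind while the metric generator is trained with its own objective; the sketch covers only the matching term.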
Related papers
- Understanding the Local Geometry of Generative Model Manifolds [14.191548577311904]
We study the relationship between the local geometry of the learned manifold and downstream generation.
We provide quantitative and qualitative evidence showing that for a given latent, the local descriptors are correlated with generation aesthetics, artifacts, uncertainty, and even memorization.
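One standard way to probe such local geometry, sketched below, is through the singular values of the generator's Jacobian at a latent point; `decoder` and the descriptor names are assumptions for illustration, not necessarily the paper's definitions.

```python
# Sketch: local-geometry descriptors at a latent z via the generator's
# Jacobian. `decoder` is a hypothetical generator network; the paper's
# exact descriptors may differ. (Exact Jacobians are expensive for
# large decoders; this is for illustration.)
import torch

def local_descriptors(decoder, z: torch.Tensor) -> dict:
    # Jacobian of the flattened output w.r.t. a single latent vector z
    J = torch.autograd.functional.jacobian(
        lambda v: decoder(v.unsqueeze(0)).flatten(), z)
    s = torch.linalg.svdvals(J)  # local stretching factors of the map
    return {
        "log_volume": s.clamp_min(1e-12).log().sum().item(),        # local volume element
        "anisotropy": (s.max() / s.min().clamp_min(1e-12)).item(),  # condition number
    }
```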
arXiv Detail & Related papers (2024-08-15T17:59:06Z)
- (Deep) Generative Geodesics [57.635187092922976]
We introduce a new Riemannian metric to assess the similarity between any two data points.
Our metric leads to the conceptual definition of generative distances and generative geodesics.
Their approximations are proven to converge to their true values under mild conditions.
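A common way to approximate such geodesics, sketched below under assumptions, is to discretize a latent path and minimize the energy of its decoded curve; `decoder` is a hypothetical generator, and the scheme is a generic approximation rather than the paper's exact algorithm.

```python
# Sketch: approximate a geodesic between latents z0 and z1 by
# discretizing the path and minimizing decoded curve energy.
import torch

def geodesic_path(decoder, z0, z1, n_points=16, steps=200, lr=1e-2):
    # interior points of the path are optimized; endpoints stay fixed
    t = torch.linspace(0, 1, n_points)[1:-1, None]
    interior = ((1 - t) * z0 + t * z1).clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(steps):
        path = torch.cat([z0[None], interior, z1[None]])
        imgs = decoder(path)
        # curve energy measured in data space (sum of squared segments)
        seglens = (imgs[1:] - imgs[:-1]).flatten(1).norm(dim=1)
        loss = seglens.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.cat([z0[None], interior.detach(), z1[None]])
```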
arXiv Detail & Related papers (2024-07-15T21:14:02Z)
- Deep Generative Sampling in the Dual Divergence Space: A Data-efficient & Interpretative Approach for Generative AI [29.13807697733638]
We build on the remarkable achievements in generative sampling of natural images.
We propose an innovative, potentially overly ambitious challenge: generating samples that resemble images.
The statistical difficulty lies in the small sample size, sometimes only a few hundred subjects.
arXiv Detail & Related papers (2024-04-10T22:35:06Z)
- Dendrogram distance: an evaluation metric for generative networks using hierarchical clustering [2.4283303315272713]
We present a novel metric for generative modeling evaluation, focusing primarily on generative networks.
The method uses dendrograms to represent real and fake data, allowing for the divergence between training and generated samples to be computed.
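A plausible minimal sketch of such a dendrogram-based comparison is shown below, using cophenetic distances from SciPy's hierarchical clustering; the paper's exact divergence may differ.

```python
# Sketch of a dendrogram-based comparison between real and generated
# samples via cophenetic (merge-height) distances. Assumed variant,
# not necessarily the paper's exact definition.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet

def dendrogram_distance(real: np.ndarray, fake: np.ndarray) -> float:
    # one dendrogram per sample set; compare the distributions of
    # their cophenetic distances by quantile alignment
    def cophenetic(x):
        return cophenet(linkage(x, method="average"))
    cr, cf = np.sort(cophenetic(real)), np.sort(cophenetic(fake))
    qs = np.linspace(0, 1, min(len(cr), len(cf)))
    return float(np.mean(np.abs(np.quantile(cr, qs) - np.quantile(cf, qs))))
```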
arXiv Detail & Related papers (2023-11-28T15:46:12Z)
- RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
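These questions are concrete in, for example, the Fréchet distance widely used for GAN evaluation; the sketch below assumes features have already been extracted by some chosen backbone in the representation space.

```python
# Sketch of the Frechet distance between two feature sets, a standard
# "fair distance in a representation space" for GAN evaluation.
# Assumes features were already extracted by some chosen backbone.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):   # numerical noise can yield
        covmean = covmean.real     # tiny imaginary parts
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2 * covmean))
```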
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders [45.29194877564103]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the approximation power of such networks and derive a bound that depends essentially on the intrinsic dimension of the data manifold rather than the dimension of the ambient space.
arXiv Detail & Related papers (2022-08-22T19:58:03Z)
- Hyperbolic Vision Transformers: Combining Improvements in Metric Learning [116.13290702262248]
We propose a new hyperbolic-based model for metric learning.
At the core of our method is a vision transformer with output embeddings mapped to hyperbolic space.
We evaluate the proposed model with six different formulations on four datasets.
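For reference, the sketch below computes the standard Poincaré-ball distance (curvature 1) against which such hyperbolic embeddings are typically compared; the paper's exact parametrization may differ.

```python
# Sketch: hyperbolic (Poincare-ball, curvature c=1) distance between
# embeddings, the kind of metric a hyperbolic ViT head would use.
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-6):
    # x, y: points strictly inside the unit ball, shape (..., d)
    sq = ((x - y) ** 2).sum(-1)
    nx = 1 - (x ** 2).sum(-1)
    ny = 1 - (y ** 2).sum(-1)
    arg = 1 + 2 * sq / (nx * ny).clamp_min(eps)
    return torch.acosh(arg.clamp_min(1 + eps))
```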
arXiv Detail & Related papers (2022-03-21T09:48:23Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Lessons Learned from the Training of GANs on Artificial Datasets [0.0]
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, making their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs yields greater performance gains than increasing network depth or width.
arXiv Detail & Related papers (2020-07-13T14:51:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.