Improving Generative Adversarial Networks with Local Coordinate Coding
- URL: http://arxiv.org/abs/2008.00942v1
- Date: Tue, 28 Jul 2020 09:17:50 GMT
- Title: Improving Generative Adversarial Networks with Local Coordinate Coding
- Authors: Jiezhang Cao, Yong Guo, Qingyao Wu, Chunhua Shen, Junzhou Huang,
Mingkui Tan
- Abstract summary: Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
- Score: 150.24880482480455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have shown remarkable success in
generating realistic data from some predefined prior distribution (e.g.,
Gaussian noise). However, such a prior distribution is often independent of the real
data and thus may lose semantic information (e.g., geometric structure or
content in images) of the data. In practice, the semantic information might be
represented by some latent distribution learned from the data. However, such a latent
distribution may make data sampling difficult for GANs. In this paper,
rather than sampling from the predefined prior distribution, we propose an
LCCGAN model with local coordinate coding (LCC) to improve the performance of
generating data. First, we propose an LCC sampling method in LCCGAN to sample
meaningful points from the latent manifold. With the LCC sampling method, we
can exploit the local information on the latent manifold and thus produce new
data with promising quality. Second, we propose an improved version, namely
LCCGAN++, by introducing a higher-order term in the generator approximation.
This term achieves a better approximation and thus further improves the
performance. More critically, we derive generalization bounds for both
LCCGAN and LCCGAN++ and prove that a low-dimensional input is sufficient to
achieve good generalization performance. Extensive experiments on four
benchmark datasets demonstrate the superiority of the proposed method over
existing GANs.
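To make the LCC sampling idea concrete, below is a minimal sketch in PyTorch: a latent code is formed as a local convex combination of a few basis points on the learned latent manifold, so each sample stays in a low-dimensional local region and carries the local geometric information the abstract refers to. All names (lcc_sample, n_neighbors) and the random convex weighting are illustrative assumptions, not the paper's implementation; the higher-order generator term of LCCGAN++ is omitted here.

```python
# A minimal sketch of LCC-style latent sampling, assuming a set of learned
# basis points that approximate the latent manifold. Names and the random
# convex weighting are illustrative, not taken from the paper.
import torch

def lcc_sample(bases: torch.Tensor, n_samples: int, n_neighbors: int = 3) -> torch.Tensor:
    """Sample latent codes as local convex combinations of basis points.

    bases: (n_bases, latent_dim) anchor points on the latent manifold.
    Each sample mixes the `n_neighbors` bases nearest to a randomly chosen
    anchor, so the code stays in a local region of the manifold (the
    locality property of LCC).
    """
    n_bases, latent_dim = bases.shape
    # Pick a random anchor basis for each sample.
    anchor_idx = torch.randint(n_bases, (n_samples,))
    anchors = bases[anchor_idx]                          # (n_samples, latent_dim)
    # Find the n_neighbors bases nearest to each anchor.
    dists = torch.cdist(anchors, bases)                  # (n_samples, n_bases)
    _, nn_idx = dists.topk(n_neighbors, largest=False)   # (n_samples, n_neighbors)
    neighbors = bases[nn_idx]                            # (n_samples, n_neighbors, latent_dim)
    # Random convex weights (nonnegative, summing to 1) act as the
    # low-dimensional local coordinates.
    w = torch.rand(n_samples, n_neighbors)
    w = w / w.sum(dim=1, keepdim=True)
    return (w.unsqueeze(-1) * neighbors).sum(dim=1)      # (n_samples, latent_dim)

# Usage: codes = lcc_sample(torch.randn(64, 128), n_samples=16); x = G(codes)
```

In practice, the basis points would be learned from the data (e.g., with an autoencoder over the latent manifold) rather than drawn at random as in the usage line above.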
Related papers
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle to learn from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt); a sketch of such a regularizer appears after this list.
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with far fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal results.
We propose a statistical regularization that controls the variance of the low-dimensional representation during autoencoder training and achieves high diversity in the samples generated with the GAN (a sketch of such a regularizer appears after this list).
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (see the MMD sketch after this list).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Top-Down Deep Clustering with Multi-generator GANs [0.0]
Deep clustering (DC) learns embedding spaces that are optimal for cluster analysis.
We propose HC-MGAN, a new technique based on GANs with multiple generators (MGANs).
Our method is inspired by the observation that each generator of an MGAN tends to generate data that correlates with a sub-region of the real data distribution.
arXiv Detail & Related papers (2021-12-06T22:53:12Z)
- Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration [24.28407308818025]
Boundary-Calibration GANs (BCGANs) are proposed to improve GANs' model compatibility.
BCGANs generate realistic images like the original GANs while also achieving superior model compatibility.
arXiv Detail & Related papers (2021-11-03T16:08:09Z)
- PriorGAN: Real Data Prior for Generative Adversarial Nets [36.01759301994946]
We propose a novel prior that captures the whole real data distribution for GANs; the resulting models are called PriorGANs.
Our experiments demonstrate that PriorGANs outperform the state-of-the-art on the CIFAR-10, FFHQ, LSUN-cat, and LSUN-bird datasets by large margins.
arXiv Detail & Related papers (2020-06-30T17:51:47Z)
- Towards GANs' Approximation Ability [8.471366736328811]
This paper first theoretically analyzes GANs' approximation property.
We prove that the generator with an input latent variable in GANs can universally approximate the underlying data distribution.
On practical datasets, four GANs using SDG also outperform the corresponding traditional GANs when the model architectures are smaller.
arXiv Detail & Related papers (2020-04-10T02:40:16Z)
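For the SMaRt entry above, the following is a minimal sketch of a score-matching regularity term added to a non-saturating generator loss, assuming a pretrained score network score_net that approximates the data score ∇_x log p_data(x). The loss form, the weight lam, and all names are illustrative assumptions rather than the SMaRt algorithm itself.

```python
# A hedged sketch: score-matching regularity for a GAN generator.
# `score_net` is assumed to approximate grad_x log p_data(x); all names
# here are illustrative, not the SMaRt authors' implementation.
import torch

def generator_loss_with_smart(G, D, score_net, z, lam: float = 0.1) -> torch.Tensor:
    fake = G(z)
    # Standard non-saturating GAN generator loss on the discriminator logits.
    adv = torch.nn.functional.softplus(-D(fake)).mean()
    # Score-matching regularity: the gradient of this term w.r.t. `fake`
    # is -score, so descending it pushes generated samples along the data
    # score, i.e. toward higher-density regions of the real data manifold.
    with torch.no_grad():
        score = score_net(fake)            # approximates grad_x log p_data(fake)
    reg = -(score * fake).sum(dim=tuple(range(1, fake.dim()))).mean()
    return adv + lam * reg
```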
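For the LD-GAN entry, here is a minimal sketch of a variance regularizer on the encoder's low-dimensional latent codes; the target variance and all names are illustrative assumptions, not the paper's method.

```python
# A hedged sketch: statistical variance regularization of latent codes.
import torch

def variance_regularizer(z: torch.Tensor, target_var: float = 1.0) -> torch.Tensor:
    """Penalize deviation of per-dimension latent variance from a target.

    z: (batch, latent_dim) codes from the autoencoder's encoder. Keeping
    the variance near `target_var` controls the spread of the
    low-dimensional representation that the GAN later samples from.
    """
    var = z.var(dim=0, unbiased=False)       # (latent_dim,) per-dimension variance
    return ((var - target_var) ** 2).mean()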
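For the DaC entry, here is a minimal sketch of a batch-level Maximum Mean Discrepancy (MMD) loss with a Gaussian kernel; the memory-bank machinery of DaC is omitted, and the kernel bandwidth sigma is an illustrative assumption.

```python
# A hedged sketch: batch-level MMD^2 between two feature distributions.
import torch

def gaussian_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Gaussian kernel on pairwise squared distances between rows of x and y.
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(src_feats: torch.Tensor, tgt_feats: torch.Tensor) -> torch.Tensor:
    """Batch estimate of MMD^2 between source-like and target-specific features."""
    k_ss = gaussian_kernel(src_feats, src_feats).mean()
    k_tt = gaussian_kernel(tgt_feats, tgt_feats).mean()
    k_st = gaussian_kernel(src_feats, tgt_feats).mean()
    # MMD^2 = E[k(s,s)] + E[k(t,t)] - 2 E[k(s,t)]; minimizing it pulls the
    # two feature distributions together.
    return k_ss + k_tt - 2 * k_st
```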