MMCGAN: Generative Adversarial Network with Explicit Manifold Prior
- URL: http://arxiv.org/abs/2006.10331v1
- Date: Thu, 18 Jun 2020 07:38:54 GMT
- Title: MMCGAN: Generative Adversarial Network with Explicit Manifold Prior
- Authors: Guanhua Zheng, Jitao Sang, Changsheng Xu
- Abstract summary: We propose to employ explicit manifold learning as a prior to alleviate mode collapse and stabilize the training of GAN.
Our experiments on both toy data and real datasets show the effectiveness of MMCGAN in alleviating mode collapse, stabilizing training, and improving the quality of generated samples.
- Score: 78.58159882218378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Generative Adversarial Network (GAN) provides a good generative framework
for producing realistic samples, but suffers from two well-recognized issues: mode
collapse and unstable training. In this work, we propose to employ explicit manifold
learning as a prior to alleviate mode collapse and stabilize the training of GAN. Since
the basic assumption of conventional manifold learning fails in the case of sparse and
uneven data distributions, we introduce a new target, Minimum Manifold Coding (MMC), for
manifold learning to encourage a simple and unfolded manifold. In essence, MMC is the
general case of the shortest Hamiltonian path problem and pursues a manifold with minimum
Riemannian volume. Using the standardized codes from MMC as a prior, GAN is guaranteed to
recover a simple and unfolded manifold covering all the training data. Our experiments on
both toy data and real datasets show the effectiveness of MMCGAN in alleviating mode
collapse, stabilizing training, and improving the quality of generated samples.
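The abstract describes MMC only at a high level. As a rough illustration (not the authors' implementation), the minimal sketch below approximates a shortest Hamiltonian path with a greedy nearest-neighbour heuristic, uses the normalized position along that path as a standardized one-dimensional manifold code, and adds a reconstruction term that anchors a generator to those codes. The names (mmc_codes, lambda_mmc) and the 1-D code dimension are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of an MMC-style manifold prior (illustrative only).
import numpy as np
import torch
import torch.nn as nn

def mmc_codes(X: np.ndarray) -> np.ndarray:
    """Greedy nearest-neighbour approximation of a shortest Hamiltonian path.

    Returns a code in [0, 1] for every row of X: the normalized arc-length
    position of that sample along the greedy path. This stands in for the
    standardized Minimum Manifold Coding described in the abstract.
    """
    n = len(X)
    path = [0]                                   # start at an arbitrary sample
    remaining = set(range(1, n))
    while remaining:
        last = X[path[-1]]
        nxt = min(remaining, key=lambda j: np.linalg.norm(X[j] - last))
        path.append(nxt)
        remaining.remove(nxt)
    steps = [0.0] + [np.linalg.norm(X[path[i + 1]] - X[path[i]])
                     for i in range(n - 1)]
    arc = np.cumsum(steps)
    arc /= arc[-1]                               # normalize arc length to [0, 1]
    codes = np.empty(n)
    codes[path] = arc                            # code of sample i = its path position
    return codes

# Toy usage: 2-D points near a circle, coded by a 1-D manifold coordinate.
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
X = np.stack([np.cos(theta), np.sin(theta)], axis=1) + 0.02 * rng.normal(size=(200, 2))
codes = mmc_codes(X)

# Illustrative prior term: ask a generator to reconstruct each sample from its code,
# so its latent space stays anchored to the simple, unfolded manifold found by MMC.
G = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))
c_t = torch.tensor(codes, dtype=torch.float32).unsqueeze(1)
x_t = torch.tensor(X, dtype=torch.float32)
lambda_mmc = 1.0                                 # illustrative weight (an assumption)
prior_loss = lambda_mmc * ((G(c_t) - x_t) ** 2).mean()
prior_loss.backward()                            # would be added to the adversarial loss
```

In MMCGAN itself, the coding target, code dimensionality, and training procedure are those defined in the paper; this sketch only conveys the general idea of using a path-based, standardized manifold code as a generator prior.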
Related papers
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging).
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
arXiv Detail & Related papers (2023-10-04T04:26:33Z)
- Variational Density Propagation Continual Learning [0.0]
Deep Neural Networks (DNNs) deployed to the real world are regularly subject to out-of-distribution (OoD) data.
This paper proposes a framework for adapting to data distribution drift modeled by benchmark Continual Learning datasets.
arXiv Detail & Related papers (2023-08-22T21:51:39Z)
- Revisiting the Robustness of the Minimum Error Entropy Criterion: A Transfer Learning Case Study [16.07380451502911]
This paper revisits the robustness of the minimum error entropy criterion to deal with non-Gaussian noises.
We investigate its feasibility and usefulness in real-life transfer learning regression tasks, where distributional shifts are common.
arXiv Detail & Related papers (2023-07-17T15:38:11Z)
- Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct consistently improves their pre-trained generators.
arXiv Detail & Related papers (2023-05-29T04:22:57Z)
- Guided Deep Metric Learning [0.9786690381850356]
We propose a novel approach to DML that we call Guided Deep Metric Learning.
The proposed method achieves better manifold generalization and representation, with up to a 40% improvement.
arXiv Detail & Related papers (2022-06-04T17:34:11Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that GMR achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Evolving parametrized Loss for Image Classification Learning on Small Datasets [1.4685355149711303]
This paper proposes a meta-learning approach to evolving a parametrized loss function, called the Meta-Loss Network (MLN).
In our approach, the MLN is embedded in the framework of classification learning as a differentiable objective function (a minimal sketch of this idea follows below).
Experiment results demonstrate that the MLN effectively improved generalization compared to the classical cross-entropy error and mean squared error.
arXiv Detail & Related papers (2021-03-15T10:00:18Z)
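The Meta-Loss Network entry above describes a parametrized loss that is embedded in classification training as a differentiable objective and tuned by meta-learning. The sketch below is purely illustrative (class and variable names such as MetaLoss are hypothetical, not from the paper): a tiny network maps logits and targets to a scalar loss on which the classifier is trained, while an outer meta-level procedure would adjust the loss network's own parameters.

```python
# Illustrative sketch of a learnable, parametrized loss used as a differentiable
# objective for classification (hypothetical names; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaLoss(nn.Module):
    """A tiny network mapping (logits, one-hot target) to a non-negative scalar loss."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * num_classes, 32), nn.ReLU(),
                                 nn.Linear(32, 1), nn.Softplus())

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        one_hot = F.one_hot(targets, num_classes=logits.shape[1]).float()
        return self.net(torch.cat([logits, one_hot], dim=1)).mean()

num_classes = 3
classifier = nn.Linear(8, num_classes)           # stand-in classification model
meta_loss = MetaLoss(num_classes)
opt = torch.optim.SGD(classifier.parameters(), lr=0.1)

# Inner step: the classifier is updated with whatever the meta-loss currently computes.
x = torch.randn(16, 8)
y = torch.randint(0, num_classes, (16,))
loss = meta_loss(classifier(x), y)               # differentiable w.r.t. both networks
opt.zero_grad()
loss.backward()
opt.step()
# An outer meta-level procedure (evolution, in the paper) would then adjust the
# meta-loss parameters based on the classifier's generalization performance.
```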