HGAN: Hybrid Generative Adversarial Network
- URL: http://arxiv.org/abs/2102.03710v1
- Date: Sun, 7 Feb 2021 03:54:12 GMT
- Title: HGAN: Hybrid Generative Adversarial Network
- Authors: Seyed Mehdi Iranmanesh and Nasser M. Nasrabadi
- Abstract summary: We propose a hybrid generative adversarial network (HGAN) in which data density estimation is enforced via an autoregressive model.
A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model's information in addition to the standard GAN training approach.
- Score: 25.940501417539416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a simple approach to train Generative
Adversarial Networks (GANs) that avoids the \textit{mode collapse} issue.
Implicit models such as GANs tend to generate better samples than explicit
models trained on a tractable data likelihood. However, GANs overlook explicit
data density characteristics, which leads to undesirable quantitative
evaluations and mode collapse. To bridge this gap, we propose a hybrid
generative adversarial network (HGAN) in which we enforce data density
estimation via an autoregressive model and support both the adversarial and
likelihood frameworks in a joint training manner, which diversifies the
estimated density so that it covers different modes. We propose to use an
adversarial network to \textit{transfer knowledge} from an autoregressive
model (teacher) to the generator (student) of a GAN model. A novel deep
architecture within the GAN formulation is developed to adversarially distill
the autoregressive model's information in addition to the standard GAN
training approach. We conduct extensive experiments on real-world datasets
(i.e., MNIST, CIFAR-10, STL-10) to demonstrate the effectiveness of the
proposed HGAN under qualitative and quantitative evaluations. The experimental
results show the superiority and competitiveness of our method compared to the
baselines.
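The joint adversarial-plus-likelihood training described above can be sketched as a single generator update that combines a standard GAN loss with a teacher-driven distillation term. This is a minimal illustrative sketch, assuming a PyTorch setup: the tiny networks, the stand-in teacher (a frozen scorer in place of a real autoregressive model such as PixelCNN), and the weight `lam` are all assumptions for illustration, not the authors' actual architecture or hyperparameters.

```python
# Hybrid GAN training sketch: adversarial loss + autoregressive-teacher
# distillation term on the generator. Models and weights are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, z_dim, batch = 8, 4, 32

G = nn.Sequential(nn.Linear(z_dim, 16), nn.ReLU(), nn.Linear(16, dim))  # generator (student)
D = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))      # discriminator

# Stand-in for a pretrained autoregressive teacher: a frozen network whose
# scalar output plays the role of a per-sample log-likelihood.
teacher = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))
for p in teacher.parameters():
    p.requires_grad_(False)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
lam = 0.1  # weight of the distillation (likelihood) term -- assumed hyperparameter

real = torch.randn(batch, dim)   # toy "real" batch
z = torch.randn(batch, z_dim)

# Discriminator step: separate real samples from generated ones.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: adversarial loss plus a teacher term that pushes
# generated samples toward high density under the autoregressive model.
fake = G(z)
adv = bce(D(fake), torch.ones(batch, 1))
distill = -teacher(fake).mean()  # maximize teacher "log-likelihood" of fakes
loss_g = adv + lam * distill
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The design point being illustrated is that the likelihood term supplies an explicit density signal the pure GAN objective lacks, which is what the abstract credits with diversifying the estimated density across modes.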