HGAN: Hybrid Generative Adversarial Network
- URL: http://arxiv.org/abs/2102.03710v1
- Date: Sun, 7 Feb 2021 03:54:12 GMT
- Title: HGAN: Hybrid Generative Adversarial Network
- Authors: Seyed Mehdi Iranmanesh and Nasser M. Nasrabadi
- Abstract summary: We propose a hybrid generative adversarial network (HGAN) for which we can enforce data density estimation via an autoregressive model.
A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model's information in addition to the standard GAN training approach.
- Score: 25.940501417539416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a simple approach to training Generative Adversarial Networks (GANs) that avoids the mode collapse issue. Implicit models such as GANs tend to generate better samples than explicit models trained on a tractable data likelihood. However, GANs overlook the explicit data density characteristics, which leads to undesirable quantitative evaluations and mode collapse. To bridge this gap, we propose a hybrid generative adversarial network (HGAN) that enforces data density estimation via an autoregressive model and supports both the adversarial and likelihood frameworks in a joint training scheme, diversifying the estimated density so that it covers different modes. We propose to use an adversarial network to transfer knowledge from an autoregressive model (teacher) to the generator (student) of a GAN model. A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model's information in addition to the standard GAN training. We conduct extensive experiments on real-world datasets (i.e., MNIST, CIFAR-10, STL-10) to demonstrate the effectiveness of the proposed HGAN under qualitative and quantitative evaluations. The experimental results show the superiority and competitiveness of our method compared to the baselines.
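To make the teacher-student formulation concrete, here is a minimal, hypothetical PyTorch sketch of one HGAN-style training step. It assumes a pretrained autoregressive teacher exposing a sample() method and two discriminators (one against real data, one against teacher samples); the names and the exact loss composition are illustrative, not the paper's verbatim architecture.

```python
import torch
import torch.nn.functional as F

def hgan_step(G, D_data, D_distill, teacher, real, opt_g, opt_d, z_dim=128):
    n = real.size(0)
    ones = torch.ones(n, 1, device=real.device)
    zeros = torch.zeros_like(ones)
    z = torch.randn(n, z_dim, device=real.device)
    fake = G(z)
    with torch.no_grad():
        teacher_x = teacher.sample(n)  # assumed sampling API of the pretrained teacher

    # Discriminator update: D_data separates real from generated; D_distill
    # separates teacher samples from generated, carrying the teacher's density
    # information to the generator adversarially.
    d_loss = (F.binary_cross_entropy_with_logits(D_data(real), ones)
              + F.binary_cross_entropy_with_logits(D_data(fake.detach()), zeros)
              + F.binary_cross_entropy_with_logits(D_distill(teacher_x), ones)
              + F.binary_cross_entropy_with_logits(D_distill(fake.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool both discriminators at once.
    fake = G(z)
    g_loss = (F.binary_cross_entropy_with_logits(D_data(fake), ones)
              + F.binary_cross_entropy_with_logits(D_distill(fake), ones))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```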
Related papers
- Parallelly Tempered Generative Adversarial Networks [7.94957965474334]
A generative adversarial network (GAN) has been a representative backbone model in generative artificial intelligence (AI).
This work analyzes the training instability and inefficiency in the presence of mode collapse by linking it to multimodality in the target distribution.
With our newly developed GAN objective function, the generator can learn all the tempered distributions simultaneously.
arXiv Detail & Related papers (2024-11-18T18:01:13Z)
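One purely illustrative way to realize "tempered" intermediate targets is mixup-style interpolation between real samples, which smooths a multimodal target into a continuum of easier densities; the paper's actual construction and objective may differ.

```python
import torch

def tempered_batch(real: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Hypothetical sketch: draw a batch from a mixup-tempered data distribution."""
    lam = torch.distributions.Beta(alpha, alpha).sample((real.size(0),)).to(real.device)
    lam = lam.view(-1, *([1] * (real.dim() - 1)))  # broadcast over sample dims
    perm = torch.randperm(real.size(0), device=real.device)
    return lam * real + (1 - lam) * real[perm]  # convex combinations of real samples
```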
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been regarded as a challenging property to encode in neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution [67.9215891673174]
We propose score entropy as a novel loss that naturally extends score matching to discrete spaces.
We test our Score Entropy Discrete Diffusion models on standard language modeling tasks.
arXiv Detail & Related papers (2023-10-25T17:59:12Z)
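The score entropy idea is to have the network estimate ratios of the data distribution between discrete states, generalizing score matching. The following is one commonly cited form of the loss, stated from memory and hedged accordingly; check the paper for the exact weights w_{xy}.

```latex
% s_\theta(x)_y estimates the ratio p(y)/p(x); K(a) = a(\log a - 1) makes the
% loss nonnegative and minimized exactly at the true ratios.
\mathcal{L}_{\mathrm{SE}}
  = \mathbb{E}_{x \sim p} \sum_{y \neq x} w_{xy}
    \left( s_\theta(x)_y - \frac{p(y)}{p(x)} \log s_\theta(x)_y
           + K\!\left(\tfrac{p(y)}{p(x)}\right) \right)
```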
- Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption [73.98706049140098]
We propose a novel phasic content fusing few-shot diffusion model with directional distribution consistency loss.
Specifically, we design a phasic training strategy with phasic content fusion to help our model learn content and style information when the diffusion timestep t is large.
Finally, we propose a cross-domain structure guidance strategy that enhances structure consistency during domain adaptation.
arXiv Detail & Related papers (2023-09-07T14:14:11Z)
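A hedged reading of the phasic strategy is a timestep-dependent loss weighting: emphasize content at large diffusion timesteps t (coarse structure) and style/detail at small t. The sketch below assumes per-sample loss tensors and an illustrative linear schedule; the paper's actual fusion mechanism is more involved.

```python
import torch

def phasic_loss(content_loss: torch.Tensor, style_loss: torch.Tensor,
                t: torch.Tensor, t_max: int = 1000) -> torch.Tensor:
    """Blend per-sample content and style losses by diffusion timestep."""
    w = (t.float() / t_max).clamp(0, 1)  # large t -> weight content more
    return (w * content_loss + (1 - w) * style_loss).mean()
```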
- Phoenix: A Federated Generative Diffusion Model [6.09170287691728]
Training generative models on large centralized datasets can pose challenges in terms of data privacy, security, and accessibility.
This paper proposes a novel method for training a Denoising Diffusion Probabilistic Model (DDPM) across multiple data sources using Federated Learning (FL) techniques.
arXiv Detail & Related papers (2023-06-07T01:43:09Z)
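Federated DDPM training can be pictured as an ordinary FedAvg round: each client runs a few local denoising-loss steps, and the server averages the resulting weights. The denoising_loss method and the naive state-dict averaging below are assumptions for illustration, not Phoenix's exact protocol.

```python
import copy
import torch

def fedavg_round(global_model, clients, local_steps=10):
    """clients: iterable of (data_loader, optimizer_factory) pairs."""
    states = []
    for loader, opt_fn in clients:
        local = copy.deepcopy(global_model)
        opt = opt_fn(local.parameters())
        for _, x0 in zip(range(local_steps), loader):
            loss = local.denoising_loss(x0)  # assumed DDPM training loss
            opt.zero_grad(); loss.backward(); opt.step()
        states.append(local.state_dict())
    # naive parameter averaging across clients
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```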
- Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery [52.95935278819512]
We conduct the first study on spurious correlations for open-domain response generation models, based on CGDIALOG, a corpus curated in our work.
Inspired by causal discovery algorithms, we propose a novel model-agnostic method for training and inference of response generation model.
arXiv Detail & Related papers (2023-03-02T06:33:48Z)
- Tailoring Language Generation Models under Total Variation Distance [55.89964205594829]
The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method.
We develop practical bounds to apply total variation distance (TVD) to language generation.
We introduce the TaiLr objective that balances the tradeoff of estimating TVD.
arXiv Detail & Related papers (2023-02-26T16:32:52Z)
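As best recalled, TaiLr reweights the token-level negative log-likelihood by the model's own probability, smoothed by a hyperparameter gamma in [0, 1] that trades off between plain MLE and a TVD-driven weighting; consult the paper for the exact form.

```latex
\mathcal{L}_{\mathrm{TaiLr}}(\theta)
  = -\,\mathbb{E}_{y \sim p_{\mathrm{data}}}
     \left[ \frac{p_\theta(y)}{\gamma + (1 - \gamma)\, p_\theta(y)}\,
            \log p_\theta(y) \right]
```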
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
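As summarized, APA occasionally presents generated images to the discriminator as if they were real, with a probability p adapted to an overfitting signal. The sign-based heuristic and update rule below are illustrative stand-ins for the paper's exact scheme.

```python
import torch

class APA:
    def __init__(self, p=0.0, step=0.01, target=0.6):
        self.p, self.step, self.target = p, step, target

    def update(self, d_real_logits: torch.Tensor):
        # crude overfitting signal: how confidently D classifies real images
        overfit = (d_real_logits.sign().mean().item() + 1) / 2
        delta = self.step if overfit > self.target else -self.step
        self.p = min(max(self.p + delta, 0.0), 1.0)

    def augment_real(self, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
        # with probability p, swap a real image for a generated one labeled "real"
        mask = torch.rand(real.size(0), device=real.device) < self.p
        mask = mask.view(-1, *([1] * (real.dim() - 1)))
        return torch.where(mask, fake.detach(), real)
```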
- Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration [24.28407308818025]
Boundary-Calibration GANs (BCGANs) are proposed to improve GAN's model compatibility.
BCGANs generate realistic images like the original GANs but also achieve superior model compatibility.
arXiv Detail & Related papers (2021-11-03T16:08:09Z)
- Teaching a GAN What Not to Learn [20.03447539784024]
Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution.
In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi.
In the GAN framework, we not only provide the GAN positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid.
This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize generated samples that are undesirable.
arXiv Detail & Related papers (2020-10-29T14:44:24Z)
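The negative-sample idea translates naturally into a three-term discriminator loss: positives are labeled real, while both user-supplied negatives and generated samples are labeled fake. The equal weighting and BCE form below are illustrative, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def rumi_d_loss(D, positives, negatives, fakes):
    ones = torch.ones(positives.size(0), 1, device=positives.device)
    zeros_n = torch.zeros(negatives.size(0), 1, device=negatives.device)
    zeros_f = torch.zeros(fakes.size(0), 1, device=fakes.device)
    return (F.binary_cross_entropy_with_logits(D(positives), ones)
            + F.binary_cross_entropy_with_logits(D(negatives), zeros_n)  # learn to avoid
            + F.binary_cross_entropy_with_logits(D(fakes.detach()), zeros_f))
```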