Information Compensation for Deep Conditional Generative Networks
- URL: http://arxiv.org/abs/2001.08559v3
- Date: Sun, 6 Mar 2022 07:28:58 GMT
- Title: Information Compensation for Deep Conditional Generative Networks
- Authors: Zehao Wang, Kaili Wang, Tinne Tuytelaars, Jose Oramas
- Abstract summary: We propose a novel structure for unsupervised conditional GANs powered by an Information Compensation Connection (IC-Connection).
The proposed IC-Connection enables GANs to compensate for information loss incurred during deconvolution operations.
Our empirical results suggest that our method achieves better disentanglement compared to the state-of-the-art GANs in a conditional generation setting.
- Score: 38.054911004694624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, unsupervised/weakly-supervised conditional generative
adversarial networks (GANs) have achieved many successes on the task of
modeling and generating data. However, one of their weaknesses lies in their
poor ability to separate, or disentangle, the different factors that
characterize the representation encoded in their latent space. To address this
issue, we propose a novel structure for unsupervised conditional GANs powered
by a novel Information Compensation Connection (IC-Connection). The proposed
IC-Connection enables GANs to compensate for information loss incurred during
deconvolution operations. In addition, to quantify the degree of
disentanglement on both discrete and continuous latent variables, we design a
novel evaluation procedure. Our empirical results suggest that our method
achieves better disentanglement compared to the state-of-the-art GANs in a
conditional generation setting.
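As a rough illustrative sketch only (the paper's actual architecture is not reproduced here, and all names below are hypothetical), an information-compensation connection can be pictured as a lightweight parallel path that re-injects upsampled features alongside the main learned-upsampling path, so that detail smeared out by the deconvolution can be restored additively. A toy 1-D NumPy version:

```python
import numpy as np

def upsample2(x):
    # nearest-neighbor 2x upsampling, a stand-in for a learned deconvolution
    return np.repeat(x, 2, axis=-1)

def conv1d(x, w):
    # 'same'-padded 1-D convolution with an odd-length kernel w
    pad = len(w) // 2
    xp = np.pad(x, (pad, pad))
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

def ic_block(x, w_main, w_ic):
    # main path: upsample then convolve (may lose fine detail)
    main = conv1d(upsample2(x), w_main)
    # compensation path: a parallel connection that re-injects
    # the upsampled input features through its own small kernel
    comp = conv1d(upsample2(x), w_ic)
    return main + comp
```

The point of the sketch is only the additive parallel path; the real IC-Connection operates on learned 2-D feature maps inside the generator.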
Related papers
- Conditional Idempotent Generative Networks [0.0]
Conditional Idempotent Generative Networks (CIGN) is a novel approach that expands upon Idempotent Generative Networks (IGN) to enable conditional generation.
CIGNs address the unconditional nature of IGNs by incorporating conditioning mechanisms, allowing users to steer the generation process towards specific types of data.
arXiv Detail & Related papers (2024-06-05T01:31:50Z) - MTS-DVGAN: Anomaly Detection in Cyber-Physical Systems using a Dual
Variational Generative Adversarial Network [7.889342625283858]
Deep generative models are promising in detecting novel cyber-physical attacks, mitigating the vulnerability of cyber-physical systems (CPSs) without relying on labeled information.
This article proposes a novel unsupervised dual variational generative adversarial model named MTS-DVGAN.
The central concept is to enhance the model's discriminative capability by widening the distinction between reconstructed abnormal samples and their normal counterparts.
arXiv Detail & Related papers (2023-11-04T11:19:03Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have known weaknesses:
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - NOTMAD: Estimating Bayesian Networks with Sample-Specific Structures and
Parameters [70.55488722439239]
We present NOTMAD, which learns to mix archetypal networks according to sample context.
We demonstrate the utility of NOTMAD and sample-specific network inference through analysis and experiments, including patient-specific gene expression networks.
arXiv Detail & Related papers (2021-11-01T17:17:34Z) - Inferential Wasserstein Generative Adversarial Networks [9.859829604054127]
We introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs.
The iWGAN greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample.
arXiv Detail & Related papers (2021-09-13T00:43:21Z) - Are conditional GANs explicitly conditional? [0.0]
This paper proposes two contributions for conditional Generative Adversarial Networks (cGANs).
The first main contribution is an analysis of cGANs to show that they are not explicitly conditional.
The second contribution is a new method, called acontrario, that explicitly models conditionality for both parts of the adversarial architecture.
arXiv Detail & Related papers (2021-06-28T22:49:27Z) - HGAN: Hybrid Generative Adversarial Network [25.940501417539416]
We propose a hybrid generative adversarial network (HGAN) for which we can enforce data density estimation via an autoregressive model.
A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model information in addition to the standard GAN training approach.
arXiv Detail & Related papers (2021-02-07T03:54:12Z) - Estimating the Effects of Continuous-valued Interventions using
Generative Adversarial Networks [103.14809802212535]
We build on the generative adversarial networks (GANs) framework to address the problem of estimating the effect of continuous-valued interventions.
Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions.
To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator.
arXiv Detail & Related papers (2020-02-27T18:46:21Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
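The triplet loss used by the relation discriminator above follows the standard form. As a minimal sketch (the margin value and squared-Euclidean distance are assumptions for illustration, not details taken from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # pull the anchor toward the positive and push it away from the
    # negative, with a hinge so well-separated triplets cost zero
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)
```

In a GAN setting, the embeddings would come from the discriminator's feature extractor rather than raw data.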
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.