Latent Space Conditioning on Generative Adversarial Networks
- URL: http://arxiv.org/abs/2012.08803v1
- Date: Wed, 16 Dec 2020 08:58:10 GMT
- Title: Latent Space Conditioning on Generative Adversarial Networks
- Authors: Ricard Durall, Kalun Ho, Franz-Josef Pfreundt and Janis Keuper
- Abstract summary: We introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning.
In particular, our approach exploits the structure of a latent space (learned through representation learning) and employs it to condition the generative model.
- Score: 3.823356975862006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative adversarial networks are the state-of-the-art approach to
learned synthetic image generation. Although early successes were mostly
unsupervised, this trend has gradually been superseded by approaches based on
labelled data. These supervised methods allow much finer-grained control of
the output image, offering more flexibility and stability. Nevertheless, the
main drawback of such models is the necessity of annotated data. In this work,
we introduce a novel framework that benefits from two popular learning
techniques, adversarial training and representation learning, and takes a step
towards unsupervised conditional GANs. In particular, our approach exploits the
structure of a latent space (learned by representation learning) and
employs it to condition the generative model. In this way, we break the
traditional dependency between condition and label, substituting the latter with
unsupervised features coming from the latent space. Finally, we show that this
new technique is able to produce samples on demand while keeping the quality of its
supervised counterpart.
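The abstract outlines the mechanism but not the architecture. As a rough illustration of the idea, the sketch below conditions a toy GAN on pseudo-labels obtained by clustering the features of a pretrained encoder; every module, dimension, and hyperparameter here is an illustrative assumption, not the paper's actual design.

```python
# Minimal sketch of latent-space conditioning: pseudo-conditions come from an
# unsupervised encoder plus clustering, standing in for annotated labels.
import torch
import torch.nn as nn

Z_DIM, FEAT_DIM, N_CLUSTERS, IMG_DIM = 64, 32, 10, 28 * 28

class Encoder(nn.Module):
    """Stand-in for a pretrained representation-learning encoder (kept frozen)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, FEAT_DIM))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Generator conditioned on a pseudo-label embedding instead of a real label."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLUSTERS, Z_DIM)
        self.net = nn.Sequential(nn.Linear(2 * Z_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z, c):
        return self.net(torch.cat([z, self.embed(c)], dim=1))

class Discriminator(nn.Module):
    """Projection-style conditional discriminator on the same pseudo-labels."""
    def __init__(self):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU())
        self.out = nn.Linear(256, 1)
        self.embed = nn.Embedding(N_CLUSTERS, 256)
    def forward(self, x, c):
        h = self.feat(x)
        return self.out(h) + (self.embed(c) * h).sum(dim=1, keepdim=True)

def pseudo_labels(features, centroids):
    """Nearest-centroid assignment (centroids would come from k-means)."""
    return torch.cdist(features, centroids).argmin(dim=1)

encoder, G, D = Encoder(), Generator(), Discriminator()
centroids = torch.randn(N_CLUSTERS, FEAT_DIM)  # placeholder for k-means output
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

x_real = torch.randn(16, IMG_DIM)  # placeholder image batch
with torch.no_grad():
    c = pseudo_labels(encoder(x_real), centroids)  # condition without labels

# Discriminator step: real vs. fake under the shared pseudo-condition.
z = torch.randn(16, Z_DIM)
d_loss = bce(D(x_real, c), torch.ones(16, 1)) + \
         bce(D(G(z, c).detach(), c), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator for the requested condition.
g_loss = bce(D(G(z, c), c), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Under this reading, sampling "on demand" reduces to choosing a cluster id as the condition, with no annotated labels involved.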
Related papers
- Efficient Visualization of Neural Networks with Generative Models and Adversarial Perturbations [0.0]
This paper presents a novel approach for deep visualization via a generative network, offering an improvement over existing methods.
Our model simplifies the architecture by reducing the number of networks used, requiring only a generator and a discriminator.
It requires less prior training knowledge and uses a non-adversarial training process in which the discriminator acts as a guide.
arXiv Detail & Related papers (2024-09-20T14:59:25Z)
- ACTRESS: Active Retraining for Semi-supervised Visual Grounding [52.08834188447851]
A previous study, RefTeacher, makes the first attempt to tackle this task by adopting the teacher-student framework to provide pseudo confidence supervision and attention-based supervision.
However, this approach is incompatible with current state-of-the-art visual grounding models, which follow the Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z)
- Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation [13.841888171417017]
Conditional independence (CI) constraints are critical for defining and evaluating fairness in machine learning.
We introduce a new training paradigm that can be applied to any encoder architecture.
arXiv Detail & Related papers (2024-04-21T23:34:45Z)
- Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z)
- Exploring Compositional Visual Generation with Latent Classifier Guidance [19.48538300223431]
We train latent diffusion models and auxiliary latent classifiers to facilitate non-linear navigation of latent representations during generation.
We show that such conditional generation achieved by latent classifier guidance provably maximizes a lower bound of the conditional log probability during training.
We show that this paradigm based on latent classifier guidance is agnostic to pre-trained generative models, and present competitive results for both image generation and sequential manipulation of real and synthetic images; a schematic sketch of the guidance step follows below.
arXiv Detail & Related papers (2023-04-25T03:02:58Z)
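The summary does not give the exact update rule, so the following is only a schematic sketch of classifier guidance applied in a latent space: each reverse-diffusion step is nudged by the gradient of a latent classifier's log-probability. The denoiser, classifier, schedule, and guidance scale are all placeholder assumptions.

```python
# Schematic latent classifier guidance: steer each reverse-diffusion step in
# latent space with grad log p(y | z) from an auxiliary latent classifier.
import torch
import torch.nn as nn

LATENT_DIM, N_CLASSES, STEPS, GUIDANCE_SCALE = 16, 5, 50, 2.0

denoiser = nn.Sequential(nn.Linear(LATENT_DIM + 1, 64), nn.ReLU(),
                         nn.Linear(64, LATENT_DIM))   # predicts noise eps
classifier = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_CLASSES))  # latent classifier

def guided_sample(target_class, n=4):
    z = torch.randn(n, LATENT_DIM)
    for step in reversed(range(STEPS)):
        t = torch.full((n, 1), step / STEPS)
        # Classifier gradient w.r.t. the current latent: grad log p(y | z).
        z_in = z.detach().requires_grad_(True)
        log_p = classifier(z_in).log_softmax(dim=1)[:, target_class].sum()
        grad = torch.autograd.grad(log_p, z_in)[0]
        with torch.no_grad():
            eps = denoiser(torch.cat([z, t], dim=1))
            # Nudge the denoising direction toward the target class.
            z = z - 0.1 * eps + GUIDANCE_SCALE * 0.1 * grad
            if step > 0:
                z = z + 0.05 * torch.randn_like(z)  # sampling noise
    return z

latents = guided_sample(target_class=3)  # would then be decoded to images
```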
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement; a toy sketch of such a loop follows below.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
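The abstract names counter-example-guided abstraction refinement (CEGAR) but not its concrete instantiation; the toy loop below illustrates the general pattern on a single sigmoid expression. The property, abstraction, and refinement strategy here are invented for illustration, not taken from the paper.

```python
# Toy CEGAR loop for a sigmoid expression. We check the property
# "f(x) = sigmoid(x) - sigmoid(x - 1) <= 0.3 for all x in [-2, 2]".
# Abstraction: bound each sigmoid independently by its interval range,
# ignoring that both share the same x (sound but loose).
# Refinement: bisect any interval whose abstract counterexample is spurious.
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def f(x):  # concrete function under verification
    return sigmoid(x) - sigmoid(x - 1.0)

THRESHOLD = 0.3

def abstract_upper_bound(lo, hi):
    # sigmoid is monotone: sigmoid(x) <= sigmoid(hi), sigmoid(x-1) >= sigmoid(lo-1).
    return sigmoid(hi) - sigmoid(lo - 1.0)

def cegar(lo, hi, max_iters=1000):
    worklist = [(lo, hi)]
    for _ in range(max_iters):
        if not worklist:
            return "verified"
        l, u = worklist.pop()
        if abstract_upper_bound(l, u) <= THRESHOLD:
            continue  # property holds on this piece of the abstraction
        candidate = 0.5 * (l + u)  # abstract counterexample to check concretely
        if f(candidate) > THRESHOLD:
            return f"falsified at x = {candidate:.4f}"
        # Spurious counterexample: refine the abstraction by bisection.
        worklist += [(l, candidate), (candidate, u)]
    return "inconclusive"

print(cegar(-2.0, 2.0))  # -> "verified"
```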
- A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training [64.71254710803368]
Adversarial Training (AT) is an effective approach to enhance the robustness of deep neural networks.
We demystify this phenomenon by developing a unified probabilistic framework, called Contrastive Energy-based Models (CEM).
From this framework, we derive principled adversarial learning and sampling methods; a sketch of the underlying energy-based view follows below.
arXiv Detail & Related papers (2022-03-25T05:33:34Z)
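The CEM equations are not given in this summary; one well-known instance of the classifier-as-energy-model connection it builds on treats -logsumexp of the logits as an energy and samples with Langevin dynamics. The sketch below shows that generic recipe, not the paper's exact formulation.

```python
# Generic classifier-as-energy-model sketch: define E(x) = -logsumexp(logits(x))
# and draw samples with stochastic gradient Langevin dynamics (SGLD).
# The classifier is an untrained placeholder; step sizes are illustrative.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))

def energy(x):
    return -torch.logsumexp(classifier(x), dim=1)

def langevin_sample(n=8, steps=100, step_size=0.01):
    x = torch.randn(n, 2)
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        # Gradient descent on the energy plus Gaussian noise (SGLD update).
        x = x - step_size * grad + (2 * step_size) ** 0.5 * torch.randn_like(x)
    return x.detach()

samples = langevin_sample()
```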
- Generative Modeling Helps Weak Supervision (and Vice Versa) [87.62271390571837]
We propose a model fusing weak supervision and generative adversarial networks.
It captures discrete variables in the data alongside the label estimate derived from weak supervision.
It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels.
arXiv Detail & Related papers (2022-03-22T20:24:21Z)
- CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network [0.5437298646956507]
Disentanglement, a critical concern in interpretable machine learning, has also garnered significant attention from the computer vision community.
We propose CoDeGAN, where we relax similarity constraints for disentanglement from the image domain to the feature domain.
We integrate self-supervised pre-training into CoDeGAN to learn semantic representations, significantly facilitating unsupervised disentanglement; a sketch of the feature-domain contrastive loss follows below.
arXiv Detail & Related papers (2021-03-05T12:44:22Z)
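The summary states that similarity constraints move from the image domain to the feature domain; a natural reading is an InfoNCE-style loss over features of samples that share a code. The sketch below implements that reading with placeholder networks; it is an assumption about the loss, not CoDeGAN's verbatim objective.

```python
# Contrastive disentanglement in the feature domain: two samples generated
# with the same code c (but different noise z) should have similar features;
# samples generated with different codes should not.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM, C_DIM, FEAT_DIM, BATCH = 32, 8, 64, 16

G = nn.Sequential(nn.Linear(Z_DIM + C_DIM, 128), nn.ReLU(), nn.Linear(128, 100))
feat = nn.Linear(100, FEAT_DIM)  # feature head (e.g., shared with D)

def contrastive_disentanglement_loss(temperature=0.1):
    c = F.one_hot(torch.randint(C_DIM, (BATCH,)), C_DIM).float()
    z1, z2 = torch.randn(BATCH, Z_DIM), torch.randn(BATCH, Z_DIM)
    # Positive pairs: same code c, independent noise z.
    f1 = F.normalize(feat(G(torch.cat([z1, c], 1))), dim=1)
    f2 = F.normalize(feat(G(torch.cat([z2, c], 1))), dim=1)
    logits = f1 @ f2.t() / temperature  # row i scored against all second views
    targets = torch.arange(BATCH)       # the i-th positive sits on the diagonal
    return F.cross_entropy(logits, targets)  # InfoNCE over the feature space

loss = contrastive_disentanglement_loss()
loss.backward()
```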
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation [10.324402925019946]
A major obstacle to the widespread adoption of neural retrieval models is that they require large supervised training sets to surpass traditional term-based techniques.
In this paper, we propose an approach to zero-shot learning for passage retrieval that uses synthetic question generation to close this gap; a sketch of this recipe follows below.
arXiv Detail & Related papers (2020-04-29T22:21:31Z)
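The pipeline is only named in this summary, so the sketch below shows the generic recipe: synthesize (question, passage) pairs with a question generator, then train a dual-encoder retriever on them with in-batch negatives. The generator, encoders, and hyperparameters are all placeholder assumptions.

```python
# Zero-shot retrieval via synthetic questions: a question generator produces
# training pairs, and a dual encoder is trained with in-batch negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 128

def generate_question(passage_tokens):
    """Placeholder: a domain-targeted seq2seq model would go here."""
    return passage_tokens[:8]

class TextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(VOCAB, DIM)  # bag-of-tokens encoder stand-in
        self.proj = nn.Linear(DIM, DIM)
    def forward(self, token_ids):
        return F.normalize(self.proj(self.emb(token_ids)), dim=1)

q_enc, p_enc = TextEncoder(), TextEncoder()
opt = torch.optim.Adam(list(q_enc.parameters()) + list(p_enc.parameters()),
                       lr=1e-4)

passages = torch.randint(VOCAB, (32, 64))  # token ids for a passage batch
questions = torch.stack([generate_question(p) for p in passages])

# In-batch negatives: passage j is a negative for question i whenever i != j.
scores = q_enc(questions) @ p_enc(passages).t()
loss = F.cross_entropy(scores / 0.05, torch.arange(32))
opt.zero_grad(); loss.backward(); opt.step()
```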