Spatially Multi-conditional Image Generation
- URL: http://arxiv.org/abs/2203.13812v1
- Date: Fri, 25 Mar 2022 17:57:13 GMT
- Title: Spatially Multi-conditional Image Generation
- Authors: Ritika Chakraborty, Nikola Popovic, Danda Pani Paudel, Thomas Probst,
Luc Van Gool
- Abstract summary: We propose a novel neural architecture to address the problem of multi-conditional image generation.
The proposed method uses a transformer-like architecture operating pixel-wise, which receives the available labels as input tokens.
Our experiments on three benchmark datasets demonstrate the clear superiority of our method over the state-of-the-art and the compared baselines.
- Score: 80.04130168156792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In most scenarios, conditional image generation can be thought of as an
inversion of the image understanding process. Since generic image understanding
involves solving multiple tasks, it is natural to aim at the generation
of images via multi-conditioning. However, multi-conditional image generation
is a very challenging problem due to the heterogeneity and the sparsity of the
(in practice) available conditioning labels. In this work, we propose a novel
neural architecture to address the problem of heterogeneity and sparsity of the
spatially multi-conditional labels. Our choice of spatial conditioning, such as
by semantics and depth, is driven by the promise it holds for better control of
the image generation process. The proposed method uses a transformer-like
architecture operating pixel-wise, which receives the available labels as input
tokens to merge them in a learned homogeneous space of labels. The merged
labels are then used for image generation via conditional generative
adversarial training. In this process, the sparsity of the labels is handled by
simply dropping the input tokens corresponding to the missing labels at the
desired locations, thanks to the proposed pixel-wise operating architecture.
Our experiments on three benchmark datasets demonstrate the clear superiority
of our method over the state-of-the-art and the compared baselines.
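As a concrete illustration of the mechanism described in the abstract, below is a minimal sketch (not the authors' code; the module names, layer sizes, and the mean-pooled readout are all assumptions) of pixel-wise label merging with a transformer encoder, where missing conditions are handled by dropping their tokens through a key-padding mask:

```python
# Minimal sketch of pixel-wise merging of heterogeneous spatial conditions.
# Assumptions: per-condition linear embeddings into a shared label space, a
# small transformer encoder, mean pooling over the surviving tokens, and at
# least one available label per pixel.

import torch
import torch.nn as nn

class PixelwiseLabelMerger(nn.Module):
    def __init__(self, cond_channels, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # One embedding per condition type: heterogeneous labels -> shared space.
        self.embed = nn.ModuleList([nn.Linear(c, d_model) for c in cond_channels])
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, conds, masks):
        # conds: list of (B, C_i, H, W) condition maps; masks: list of
        # (B, H, W) booleans, True where that condition is available.
        B, _, H, W = conds[0].shape
        tokens = torch.cat(
            [emb(c.permute(0, 2, 3, 1)).reshape(B * H * W, 1, -1)
             for c, emb in zip(conds, self.embed)],
            dim=1)                                    # (B*H*W, n_cond, d_model)
        # True entries in the padding mask are ignored by attention, i.e.
        # tokens of missing labels are simply dropped at those pixels.
        pad = torch.stack([~m.reshape(B * H * W) for m in masks], dim=1)
        merged = self.encoder(tokens, src_key_padding_mask=pad)
        # Average the surviving tokens into one merged label per pixel.
        keep = (~pad).unsqueeze(-1).float()
        merged = (merged * keep).sum(1) / keep.sum(1).clamp(min=1.0)
        return merged.reshape(B, H, W, -1).permute(0, 3, 1, 2)  # (B, d_model, H, W)
```

For example, PixelwiseLabelMerger([19, 1]) would merge a 19-class one-hot semantic map with a single-channel depth map; the merged per-pixel labels could then condition a GAN generator, roughly as the abstract describes.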
Related papers
- Fusion is all you need: Face Fusion for Customized Identity-Preserving Image Synthesis [7.099258248662009]
Text-to-image (T2I) models have significantly advanced the development of artificial intelligence.
However, existing T2I-based methods often struggle to accurately reproduce the appearance of individuals from a reference image.
We leverage the pre-trained UNet from Stable Diffusion to incorporate the target face image directly into the generation process.
arXiv Detail & Related papers (2024-09-27T19:31:04Z)
- Don't Look into the Dark: Latent Codes for Pluralistic Image Inpainting [8.572133295533643]
We present a method for large-mask pluralistic image inpainting based on the generative framework of discrete latent codes.
Our method learns latent priors, discretized as tokens, by only performing computations at the visible locations of the image.
arXiv Detail & Related papers (2024-03-27T01:28:36Z)
- Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that enables the generation of highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z)
- Wavelet-based Unsupervised Label-to-Image Translation [9.339522647331334]
We propose a new unsupervised paradigm for semantic image synthesis (USIS) that makes use of a self-supervised segmentation loss and whole-image wavelet-based discrimination.
We test our methodology on three challenging datasets and demonstrate its ability to bridge the performance gap between paired and unpaired models.
arXiv Detail & Related papers (2023-05-16T17:48:44Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for ultra-high-resolution whole slide images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Reconstruction Regularized Deep Metric Learning for Multi-label Image Classification [39.055689258395624]
We present a novel deep metric learning method to tackle the multi-label image classification problem.
Our model can be trained in an end-to-end manner.
arXiv Detail & Related papers (2020-07-27T13:28:50Z)
- Diverse Image Generation via Self-Conditioned GANs [56.91974064348137]
We train a class-conditional GAN model without using manually annotated class labels.
Instead, our model is conditional on labels automatically derived from clustering in the discriminator's feature space.
Our clustering step automatically discovers diverse modes and explicitly requires the generator to cover them (see the sketch after this list).
arXiv Detail & Related papers (2020-06-18T17:56:03Z)
- OneGAN: Simultaneous Unsupervised Learning of Conditional Image Generation, Foreground Segmentation, and Fine-Grained Clustering [100.32273175423146]
We present a method for simultaneously learning, in an unsupervised manner, a conditional image generator, foreground extraction and segmentation, and object removal and background completion.
The method combines a Generative Adversarial Network and a Variational Auto-Encoder, with multiple encoders, generators and discriminators, and benefits from solving all tasks at once.
arXiv Detail & Related papers (2019-12-31T18:15:58Z)
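For the self-conditioned GAN entry above, the following is a minimal sketch, under stated assumptions, of the clustering step it describes: real images are embedded with the discriminator's features, k-means assigns each image a cluster id, and those ids stand in for class labels. The feature-extractor interface, cluster count, and scikit-learn usage are illustrative choices, not the paper's implementation:

```python
# Hypothetical sketch of deriving pseudo class labels by clustering
# discriminator features, in the spirit of self-conditioned GANs.

import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def pseudo_labels(feature_extractor, images, k=50, batch_size=256):
    """feature_extractor: maps a (B, 3, H, W) image batch to (B, D) features
    (e.g., the discriminator's penultimate layer); images: (N, 3, H, W)."""
    feats = []
    for i in range(0, images.shape[0], batch_size):
        feats.append(feature_extractor(images[i:i + batch_size]).cpu())
    feats = torch.cat(feats).numpy()
    kmeans = KMeans(n_clusters=k, n_init=10).fit(feats)
    # Cluster ids act as automatically discovered class labels for
    # class-conditional GAN training.
    return torch.from_numpy(kmeans.labels_).long()
```

In that paper's setup the clusters would be recomputed periodically as the discriminator's feature space evolves, so that conditioning on the cluster ids keeps pushing the generator to cover the discovered modes.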
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.