Generating Annotated High-Fidelity Images Containing Multiple Coherent Objects
- URL: http://arxiv.org/abs/2006.12150v3
- Date: Thu, 15 Jul 2021 21:42:29 GMT
- Title: Generating Annotated High-Fidelity Images Containing Multiple Coherent Objects
- Authors: Bryan G. Cardenas, Devanshu Arya, Deepak K. Gupta
- Abstract summary: We propose a multi-object generation framework that can synthesize images with multiple objects without explicitly requiring contextual information.
We demonstrate how coherency and fidelity are preserved with our method through experiments on the Multi-MNIST and CLEVR datasets.
- Score: 10.783993190686132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments related to generative models have made it possible to
generate diverse high-fidelity images. In particular, layout-to-image
generation models have gained significant attention due to their capability to
generate realistic complex images containing distinct objects. These models are
generally conditioned on either semantic layouts or textual descriptions.
However, unlike natural images, providing auxiliary information can be
extremely hard in domains such as biomedical imaging and remote sensing. In
this work, we propose a multi-object generation framework that can synthesize
images with multiple objects without explicitly requiring their contextual
information during the generation process. Based on a vector-quantized
variational autoencoder (VQ-VAE) backbone, our model learns to preserve spatial
coherency within an image as well as semantic coherency between the objects and
the background through two powerful autoregressive priors: PixelSNAIL and
LayoutPixelSNAIL. While the PixelSNAIL learns the distribution of the latent
encodings of the VQ-VAE, the LayoutPixelSNAIL is used to specifically learn the
semantic distribution of the objects. An implicit advantage of our approach is
that the generated samples are accompanied by object-level annotations. We
demonstrate how coherency and fidelity are preserved with our method through
experiments on the Multi-MNIST and CLEVR datasets; thereby outperforming
state-of-the-art multi-object generative methods. The efficacy of our approach
is demonstrated through application on medical imaging datasets, where we show
that augmenting the training set with generated samples using our approach
improves the performance of existing models.
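The core mechanism described above is a VQ-VAE that discretizes encoder outputs into codebook indices, over which the autoregressive priors (PixelSNAIL and LayoutPixelSNAIL) are then fit. The quantization step can be sketched as follows; this is an illustrative NumPy example with assumed names and toy shapes, not the authors' implementation:

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z_e: (H, W, D) continuous encoder outputs
    codebook: (K, D) learned embedding vectors
    Returns the integer code indices (H, W) and quantized vectors (H, W, D).
    """
    flat = z_e.reshape(-1, z_e.shape[-1])                       # (H*W, D)
    # Squared Euclidean distance from every latent vector to every code
    dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)                                  # (H*W,)
    z_q = codebook[idx].reshape(z_e.shape)
    return idx.reshape(z_e.shape[:2]), z_q

# Toy example: a 4x4 latent grid quantized against 8 codes of dimension 3
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 3))
z_e = rng.normal(size=(4, 4, 3))
idx, z_q = vector_quantize(z_e, codebook)
print(idx.shape, z_q.shape)  # (4, 4) (4, 4, 3)
```

The resulting grid of discrete indices is what an autoregressive prior such as PixelSNAIL models, predicting each code conditioned on previously generated ones; the paper's second prior additionally conditions on object-level layout semantics.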
Related papers
- ObjBlur: A Curriculum Learning Approach With Progressive Object-Level Blurring for Improved Layout-to-Image Generation [7.645341879105626]
We present ObjBlur, a novel curriculum learning approach to improve layout-to-image generation models.
Our method is based on progressive object-level blurring, which effectively stabilizes training and enhances the quality of generated images.
arXiv Detail & Related papers (2024-04-11T08:50:12Z)
- Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that generates highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models [55.04969603431266]
This paper proposes a method for generating images of customized objects specified by users.
The method is based on a general framework that bypasses the lengthy optimization required by previous approaches.
We demonstrate through experiments that our proposed method is able to synthesize images with compelling output quality, appearance diversity, and object fidelity.
arXiv Detail & Related papers (2023-04-05T17:59:32Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto standard of Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- VAE-Info-cGAN: Generating Synthetic Images by Combining Pixel-level and Feature-level Geospatial Conditional Inputs [0.0]
We present a conditional generative model for synthesizing semantically rich images simultaneously conditioned on a pixel-level condition (PLC) and a feature-level condition (FLC).
Experiments on a GPS dataset show that the proposed model can accurately generate various forms of macroscopic aggregates across different geographic locations.
arXiv Detail & Related papers (2020-12-08T03:46:19Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.