Image Generators with Conditionally-Independent Pixel Synthesis
- URL: http://arxiv.org/abs/2011.13775v1
- Date: Fri, 27 Nov 2020 15:16:11 GMT
- Title: Image Generators with Conditionally-Independent Pixel Synthesis
- Authors: Ivan Anokhin, Kirill Demochkin, Taras Khakhulin, Gleb Sterkin, Victor
Lempitsky, Denis Korzhenkov
- Abstract summary: We present a new architecture for image generators, where the color value at each pixel is computed independently.
No spatial convolutions or similar operations that propagate information across pixels are involved during the synthesis.
We analyze the modeling capabilities of such generators when trained in an adversarial fashion, and observe the new generators to achieve similar generation quality to state-of-the-art convolutional generators.
- Score: 10.792933031825527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing image generator networks rely heavily on spatial convolutions and,
optionally, self-attention blocks in order to gradually synthesize images in a
coarse-to-fine manner. Here, we present a new architecture for image
generators, where the color value at each pixel is computed independently given
the value of a random latent vector and the coordinate of that pixel. No
spatial convolutions or similar operations that propagate information across
pixels are involved during the synthesis. We analyze the modeling capabilities
of such generators when trained in an adversarial fashion, and observe the new
generators to achieve similar generation quality to state-of-the-art
convolutional generators. We also investigate several interesting properties
unique to the new architecture.
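For intuition, here is a minimal, hedged sketch of such a pixel-wise generator in PyTorch: every pixel's RGB value is produced by the same MLP from the shared latent vector and a Fourier-feature encoding of that pixel's coordinate, with no operation that mixes information across pixels. The layer widths, the encoding, and all names below are illustrative assumptions, not the authors' released implementation (which uses modulated fully-connected layers).

```python
# A minimal sketch, assuming a simplified coordinate-MLP variant of the idea;
# sizes, the Fourier-feature encoding, and names are illustrative only.
import torch
import torch.nn as nn


class PixelwiseGenerator(nn.Module):
    def __init__(self, latent_dim=512, hidden_dim=512, n_fourier=64):
        super().__init__()
        # Fixed random projection for Fourier features of the (x, y) coordinate.
        self.register_buffer("freqs", torch.randn(2, n_fourier) * 10.0)
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 2 * n_fourier, hidden_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden_dim, hidden_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden_dim, 3),  # RGB of a single pixel
        )

    def forward(self, z, coords):
        # z: (B, latent_dim) shared latent; coords: (B, N, 2) pixel coords in [-1, 1].
        proj = 2 * torch.pi * coords @ self.freqs          # (B, N, n_fourier)
        enc = torch.cat([proj.sin(), proj.cos()], dim=-1)  # coordinate embedding
        z_rep = z.unsqueeze(1).expand(-1, coords.shape[1], -1)
        # Each pixel is synthesized from (z, coordinate) alone: no convolution,
        # attention, or other cross-pixel information flow.
        return self.mlp(torch.cat([z_rep, enc], dim=-1))   # (B, N, 3)


# Usage: sample a 64x64 image by evaluating the generator on a coordinate grid.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(1, -1, 2)
image = PixelwiseGenerator()(torch.randn(1, 512), coords).reshape(1, 64, 64, 3)
```

Because pixels are conditionally independent given the latent and their coordinates, the same trained model can be queried on any coordinate grid, which is what enables the architecture's resolution-related properties.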
Related papers
- AnomalyFactory: Regard Anomaly Generation as Unsupervised Anomaly Localization [3.180143442781838]
AnomalyFactory unifies anomaly generation and localization with the same network architecture.
Comprehensive experiments are carried out on five datasets: MVTecAD, VisA, MVTecLOCO, MADSim, and RealIAD.
arXiv Detail & Related papers (2024-08-18T16:40:11Z)
- FunkNN: Neural Interpolation for Functional Generation [23.964801524703052]
FunkNN is a new convolutional network which learns to reconstruct continuous images at arbitrary coordinates and can be applied to any image dataset.
We show that FunkNN generates high-quality continuous images and exhibits strong out-of-distribution performance thanks to its patch-based design.
arXiv Detail & Related papers (2022-12-20T16:37:20Z)
- Collaging Class-specific GANs for Semantic Image Synthesis [68.87294033259417]
We propose a new approach for high resolution semantic image synthesis.
It consists of one base image generator and multiple class-specific generators.
Experiments show that our approach can generate high quality images in high resolution.
arXiv Detail & Related papers (2021-10-08T17:46:56Z)
- Texture Generation with Neural Cellular Automata [64.70093734012121]
We learn a texture generator from a single template image.
We argue that the behaviour exhibited by the NCA model is a learned, distributed, local algorithm for texture generation.
arXiv Detail & Related papers (2021-05-15T22:05:46Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Improved Image Generation via Sparse Modeling [27.66648389933265]
We show that generators can be viewed as manifestations of the Convolutional Sparse Coding (CSC) and its Multi-Layered version (ML-CSC) synthesis processes.
We leverage this observation by explicitly enforcing a sparsifying regularization on appropriately chosen activation layers in the generator.
arXiv Detail & Related papers (2021-04-01T13:52:40Z)
- MOGAN: Morphologic-structure-aware Generative Learning from a Single Image [59.59698650663925]
Recently proposed generative models can be trained on just a single image.
We introduce a MOrphologic-structure-aware Generative Adversarial Network named MOGAN that produces random samples with diverse appearances.
Our approach focuses on internal features, including the maintenance of rational structures and variation in appearance.
arXiv Detail & Related papers (2021-03-04T12:45:23Z)
- Spatially-Adaptive Pixelwise Networks for Fast Image Translation [57.359250882770525]
We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation.
We use pixel-wise networks; that is, each pixel is processed independently of others.
Our model is up to 18x faster than state-of-the-art baselines.
arXiv Detail & Related papers (2020-12-05T10:02:03Z)
- Adversarial Generation of Continuous Images [31.92891885615843]
In this paper, we propose two novel architectural techniques for building INR-based image decoders.
We use them to build a state-of-the-art continuous image GAN.
Our proposed INR-GAN architecture improves the performance of continuous image generators by several times.
arXiv Detail & Related papers (2020-11-24T11:06:40Z)
- Generating Annotated High-Fidelity Images Containing Multiple Coherent Objects [10.783993190686132]
We propose a multi-object generation framework that can synthesize images with multiple objects without explicitly requiring contextual information.
We demonstrate how coherency and fidelity are preserved with our method through experiments on the Multi-MNIST and CLEVR datasets.
arXiv Detail & Related papers (2020-06-22T11:33:55Z)
- Deformable Generator Networks: Unsupervised Disentanglement of Appearance and Geometry [93.02523642523477]
We present a deformable generator model to disentangle the appearance and geometric information for both image and video data.
The appearance generator network models the information related to appearance, including color, illumination, identity or category.
The geometric generator performs geometric warping, such as rotation and stretching, by generating a deformation field; a minimal warping sketch is given after this list.
arXiv Detail & Related papers (2018-06-16T21:17:02Z)
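For the deformable generator entry above, the geometric warping step could look roughly like the following sketch. It uses PyTorch's grid_sample; the function name, tensor shapes, and the assumption that the deformation field is a per-pixel displacement in normalized coordinates are illustrative, not the paper's implementation.

```python
# A minimal sketch, assuming the deformation field is a per-pixel displacement
# in normalized [-1, 1] coordinates; names and shapes are illustrative only.
import torch
import torch.nn.functional as F


def warp(appearance, deformation):
    # appearance: (B, C, H, W) image from the appearance generator.
    # deformation: (B, H, W, 2) displacement field from the geometric generator.
    B, _, H, W = appearance.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    identity = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    # Resample the appearance image at the displaced grid locations.
    return F.grid_sample(appearance, identity + deformation, align_corners=True)


# Example: a constant horizontal displacement acts like a simple translation.
img = torch.rand(1, 3, 64, 64)
field = torch.zeros(1, 64, 64, 2)
field[..., 0] = 0.1  # shift the sampling grid along x
warped = warp(img, field)
```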
This list is automatically generated from the titles and abstracts of the papers on this site.