Ensembling with Deep Generative Views
- URL: http://arxiv.org/abs/2104.14551v1
- Date: Thu, 29 Apr 2021 17:58:35 GMT
- Title: Ensembling with Deep Generative Views
- Authors: Lucy Chai, Jun-Yan Zhu, Eli Shechtman, Phillip Isola, Richard Zhang
- Abstract summary: Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
- Score: 72.70801582346344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent generative models can synthesize "views" of artificial images that
mimic real-world variations, such as changes in color or pose, simply by
learning from unlabeled image collections. Here, we investigate whether such
views can be applied to real images to benefit downstream analysis tasks such
as image classification. Using a pretrained generator, we first find the latent
code corresponding to a given real input image. Applying perturbations to the
code creates natural variations of the image, which can then be ensembled
together at test-time. We use StyleGAN2 as the source of generative
augmentations and investigate this setup on classification tasks involving
facial attributes, cat faces, and cars. Critically, we find that several design
decisions are required towards making this process work; the perturbation
procedure, weighting between the augmentations and original image, and training
the classifier on synthesized images can all impact the result. Currently, we
find that while test-time ensembling with GAN-based augmentations can offer
some small improvements, the remaining bottlenecks are the efficiency and
accuracy of the GAN reconstructions, coupled with classifier sensitivities to
artifacts in GAN-generated images.
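The weighting between the original image and its GAN-generated views is one of the design decisions the abstract highlights. A minimal sketch of that ensembling step is below; the function name, the probability vectors, and the `alpha` value are all illustrative assumptions (the paper's actual StyleGAN2 inversion and latent perturbation are not shown).

```python
import numpy as np

def ensemble_predict(p_original, p_views, alpha=0.5):
    """Weighted test-time ensemble of softmax outputs.

    p_original: classifier probabilities for the real input image.
    p_views:    list of probability vectors for GAN-perturbed views.
    alpha:      weight on the original image (a design choice the paper
                notes can impact the result; 0.5 here is arbitrary).
    """
    p_aug = np.mean(np.asarray(p_views), axis=0)  # average over perturbed views
    return alpha * np.asarray(p_original) + (1.0 - alpha) * p_aug

# Toy 3-class example with made-up probabilities:
p_orig = np.array([0.6, 0.3, 0.1])
views = [np.array([0.5, 0.4, 0.1]),
         np.array([0.9, 0.05, 0.05])]
p = ensemble_predict(p_orig, views, alpha=0.5)
```

Because each view's vector is a valid distribution, the convex combination remains one, so the ensemble output can be used directly as a softmax prediction.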
Related papers
- Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z)
- GH-Feat: Learning Versatile Generative Hierarchical Features from GANs [61.208757845344074]
We show that a generative feature learned from image synthesis exhibits great potential in solving a wide range of computer vision tasks.
We first train an encoder by considering the pretrained StyleGAN generator as a learned loss function.
The visual features produced by our encoder, termed Generative Hierarchical Features (GH-Feat), align closely with the layer-wise GAN representations.
arXiv Detail & Related papers (2023-01-12T21:59:46Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes use of semantic gradients to synthesize plausible images.
We show that our method is also applicable to text-to-image generation by treating image-text foundation models as classifiers.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Automatic Correction of Internal Units in Generative Neural Networks [15.67941936262584]
Generative Adversarial Networks (GANs) have shown satisfactory performance in synthetic image generation.
However, a number of generated images exhibit defective visual patterns known as artifacts.
In this work, we devise a method that automatically identifies the internal units generating various types of artifact images.
arXiv Detail & Related papers (2021-04-13T11:46:45Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Synthesize-It-Classifier: Learning a Generative Classifier through Recurrent Self-analysis [9.029985847202667]
We show the generative capability of an image classifier network by synthesizing high-resolution, photo-realistic, and diverse images at scale.
The overall methodology, called Synthesize-It-Classifier (STIC), does not require an explicit generator network to estimate the density of the data distribution.
We demonstrate an Attentive-STIC network that shows an iterative drawing of synthesized images on the ImageNet dataset.
arXiv Detail & Related papers (2021-03-26T02:00:29Z)
- Using latent space regression to analyze and leverage compositionality in GANs [33.381584322411626]
We investigate regression into the latent space as a probe to understand the compositional properties of GANs.
We find that combining the regressor and a pretrained generator provides a strong image prior, allowing us to create composite images.
We find that the regression approach enables more localized editing of individual image parts compared to direct editing in the latent space.
arXiv Detail & Related papers (2021-03-18T17:58:01Z)
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
We show that learning to synthesize images can bring remarkable hierarchical visual features that are generalizable across a wide range of applications.
The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), transfers well to both generative and discriminative tasks.
arXiv Detail & Related papers (2020-07-20T18:04:14Z)
- Deep Snow: Synthesizing Remote Sensing Imagery with Generative Adversarial Nets [0.5249805590164901]
Generative adversarial networks (GANs) can be used to generate realistic pervasive changes in remote sensing imagery.
We investigate some transformation quality metrics based on deep embedding of the generated and real images.
arXiv Detail & Related papers (2020-05-18T17:05:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.