Deep Snow: Synthesizing Remote Sensing Imagery with Generative Adversarial Nets
- URL: http://arxiv.org/abs/2005.08892v1
- Date: Mon, 18 May 2020 17:05:00 GMT
- Title: Deep Snow: Synthesizing Remote Sensing Imagery with Generative Adversarial Nets
- Authors: Christopher X. Ren, Amanda Ziemann, James Theiler, Alice M. S. Durieux
- Abstract summary: Generative adversarial networks (GANs) can be used to generate realistic pervasive changes in remote sensing imagery.
We investigate some transformation quality metrics based on deep embedding of the generated and real images.
- Score: 0.5249805590164901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we demonstrate that generative adversarial networks (GANs) can
be used to generate realistic pervasive changes in remote sensing imagery, even
in an unpaired training setting. We investigate some transformation quality
metrics based on deep embedding of the generated and real images which enable
visualization and understanding of the training dynamics of the GAN, and may
provide a useful measure in terms of quantifying how distinguishable the
generated images are from real images. We also identify some artifacts
introduced by the GAN in the generated images, which are likely to contribute
to the differences seen between the real and generated samples in the deep
embedding feature space even in cases where the real and generated samples
appear perceptually similar.
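The abstract does not spell out its embedding-based metric, so as a hedged illustration only: one standard way to quantify how distinguishable generated images are from real ones is a Fréchet-style distance between deep features of the two sample sets. The sketch below assumes feature matrices have already been extracted with some pretrained CNN; the function name, shapes, and usage are placeholders, not the authors' code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two embedding sets.

    real_emb, gen_emb: arrays of shape (n_samples, emb_dim), e.g. features
    from a pretrained CNN applied to real and GAN-generated images.
    """
    mu_r, mu_g = real_emb.mean(axis=0), gen_emb.mean(axis=0)
    cov_r = np.cov(real_emb, rowvar=False)
    cov_g = np.cov(gen_emb, rowvar=False)
    # Matrix square root of the covariance product; tiny imaginary parts
    # from numerical error are discarded.
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Hypothetical usage with precomputed embeddings:
# d = frechet_distance(real_features, generated_features)
# Larger d -> the two embedding sets are easier to tell apart.
```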
Related papers
- ASAP: Interpretable Analysis and Summarization of AI-generated Image Patterns at Scale [20.12991230544801]
Generative image models have emerged as a promising technology to produce realistic images.
There is growing demand to empower users to effectively discern and comprehend patterns of AI-generated images.
We develop ASAP, an interactive visualization system that automatically extracts distinct patterns of AI-generated images.
arXiv Detail & Related papers (2024-04-03T18:20:41Z)
- Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection [86.97062579515833]
We introduce the concept of Neighboring Pixel Relationships (NPR) as a means to capture and characterize the generalized structural artifacts stemming from up-sampling operations; a minimal sketch of this idea follows the entry.
A comprehensive analysis is conducted on an open-world dataset comprising samples generated by 28 distinct generative models.
This analysis culminates in a new state-of-the-art performance, showing an 11.6% improvement over existing methods.
arXiv Detail & Related papers (2023-12-16T14:27:06Z)
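As flagged above, a loose sketch of neighboring-pixel relationships: the paper has its own exact formulation and code release, so treat the block size, reference pixel, and function name below as illustrative assumptions.

```python
import numpy as np

def neighboring_pixel_residual(img: np.ndarray, block: int = 2) -> np.ndarray:
    """Illustrative NPR-style residual over non-overlapping blocks.

    img: float array of shape (H, W) or (H, W, C), with H and W divisible
    by `block`. Within each block x block patch, subtract the top-left
    pixel from every pixel, exposing the local structure that common
    up-sampling layers tend to imprint on generated images.
    """
    h, w = img.shape[:2]
    out = np.zeros_like(img, dtype=np.float32)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block].astype(np.float32)
            out[i:i + block, j:j + block] = patch - patch[0, 0]
    return out
```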
- In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and a domain-regularized optimizer that keep the inverted code in the native latent space of the pre-trained GAN model; a minimal inversion sketch follows the entry.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z)
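The in-domain inversion entry above combines a learned encoder with regularized optimization. A minimal sketch of that loop, with `G`, `E`, the loss weight, and the step count as stand-ins rather than the paper's API:

```python
import torch

def invert(G, E, target, steps=500, lr=0.01, lam=2.0):
    """Sketch of optimization-based GAN inversion with an encoder prior.

    G: frozen generator mapping latent codes to images (callable).
    E: frozen encoder mapping images to latent codes (callable).
    target: image tensor the inversion should reproduce.
    lam: weight of the regularizer keeping the code near the encoder's
    prediction (a stand-in for the paper's domain regularization).
    """
    with torch.no_grad():
        z_init = E(target)                 # encoder gives the starting point
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = G(z)
        loss = torch.mean((recon - target) ** 2)           # reconstruction
        loss = loss + lam * torch.mean((z - z_init) ** 2)  # stay in-domain
        loss.backward()
        opt.step()
    return z.detach()
```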
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms; a sketch of one common high-pass extraction follows the entry.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
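For the high-frequency components mentioned above, one simple and common choice (an assumption here, not necessarily the paper's filter) is the residual left after Gaussian blurring:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_residual(img: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """High-pass component of an image as a blur residual.

    img: float array (H, W) or (H, W, C). Subtracting a Gaussian-blurred
    copy keeps edges and fine texture, where rendering artifacts of
    computer-generated images tend to concentrate.
    """
    img = img.astype(np.float32)
    # Blur only the spatial axes; leave any channel axis untouched.
    sigmas = (sigma, sigma) + (0,) * (img.ndim - 2)
    return img - gaussian_filter(img, sigma=sigmas)
```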
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars; a minimal ensembling sketch follows the entry.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
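Ensembling generative views amounts to test-time augmentation: classify each synthesized view and average the probabilities. The classifier and view list below are placeholders for whatever StyleGAN2-based pipeline produces them:

```python
import numpy as np

def ensemble_predict(classifier, views):
    """Average a classifier's probabilities over GAN-generated views.

    classifier: callable mapping an image to a probability vector.
    views: list of images, e.g. the original image plus variants
    synthesized by perturbing its inverted latent code.
    """
    probs = np.stack([classifier(v) for v in views])
    return probs.mean(axis=0)  # ensembled class probabilities
```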
- Unsupervised Discovery of Disentangled Manifolds in GANs [74.24771216154105]
An interpretable generation process is beneficial to various image editing applications.
We propose a framework to discover interpretable directions in the latent space given arbitrary pre-trained generative adversarial networks; a sketch of one such discovery recipe follows the entry.
arXiv Detail & Related papers (2020-11-24T02:18:08Z)
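For latent direction discovery, one well-known unsupervised recipe (closed-form factorization, used here as a stand-in since the entry's own framework may differ) takes the top right-singular vectors of the first weight matrix that consumes the latent code:

```python
import numpy as np

def top_latent_directions(A: np.ndarray, k: int = 5) -> np.ndarray:
    """Directions in latent space most amplified by a projection weight A.

    A: (out_dim, latent_dim) weight of the generator layer that first
    consumes the latent code. Returns k unit-norm directions (rows).
    """
    # The right-singular vectors of A with the largest singular values
    # are the eigenvectors of A^T A with the largest eigenvalues.
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return vt[:k]
```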
- CNN Detection of GAN-Generated Face Images based on Cross-Band Co-occurrences Analysis [34.41021278275805]
The latest generation of GAN models can generate synthetic images that are visually indistinguishable from natural ones.
We propose a method for distinguishing GAN-generated from natural images by exploiting inconsistencies among spectral bands; a minimal co-occurrence sketch follows the entry.
arXiv Detail & Related papers (2020-07-25T10:55:04Z)
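A cross-band co-occurrence matrix counts how often quantized values in one spectral band co-occur with values in another band at the same pixel; GANs can disturb these joint statistics. The quantization level below is my choice, not the paper's:

```python
import numpy as np

def cross_band_cooccurrence(band_a, band_b, levels: int = 64) -> np.ndarray:
    """Co-occurrence matrix between two spectral bands of one image.

    band_a, band_b: uint8 arrays of equal shape (H, W). Pixel values are
    quantized to `levels` bins; entry (i, j) counts pixels whose band-a
    value falls in bin i while the co-located band-b value falls in bin j.
    """
    qa = (band_a.astype(np.int64) * levels) // 256
    qb = (band_b.astype(np.int64) * levels) // 256
    mat = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(mat, (qa.ravel(), qb.ravel()), 1)
    return mat
```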
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
We show that learning to synthesize images can yield remarkable hierarchical visual features that generalize across a wide range of applications.
The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
arXiv Detail & Related papers (2020-07-20T18:04:14Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection; a minimal projection sketch follows the entry.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
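InterFaceGAN's two steps sketched minimally: a semantic direction found as the unit normal of a linear SVM boundary fit on latent codes, and subspace projection to decorrelate two directions. The latent codes and attribute labels are hypothetical inputs:

```python
import numpy as np
from sklearn.svm import LinearSVC

def semantic_direction(latents: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Unit normal of a linear boundary separating an attribute.

    latents: (n, latent_dim) codes sampled from the GAN; labels: binary
    attribute labels (e.g., smiling vs. not) for the synthesized images.
    """
    svm = LinearSVC(C=1.0).fit(latents, labels)
    n = svm.coef_[0]
    return n / np.linalg.norm(n)

def disentangle(n1: np.ndarray, n2: np.ndarray) -> np.ndarray:
    """Remove from direction n1 its component along direction n2.

    Subspace projection: the result edits semantic 1 while, to first
    order, leaving semantic 2 unchanged.
    """
    n1p = n1 - (n1 @ n2) * n2
    return n1p / np.linalg.norm(n1p)
```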
This list is automatically generated from the titles and abstracts of the papers on this site.