Conditional Progressive Generative Adversarial Network for satellite
image generation
- URL: http://arxiv.org/abs/2211.15303v1
- Date: Mon, 28 Nov 2022 13:33:53 GMT
- Title: Conditional Progressive Generative Adversarial Network for satellite
image generation
- Authors: Renato Cardoso, Sofia Vallecorsa, Edoardo Nemni
- Abstract summary: We formulate the image generation task as completion of an image where one out of three corners is missing.
We then extend this approach to iteratively build larger images with the same level of detail.
Our goal is to obtain a scalable methodology to generate high resolution samples typically found in satellite imagery data sets.
- Score: 0.7734726150561089
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image generation and image completion are rapidly evolving fields, thanks to
machine learning algorithms that are able to realistically replace missing
pixels. However, generating large, high-resolution images with a high level of
detail presents significant computational challenges. In this work, we
formulate the image generation task as completion of an image where one out of
three corners is missing. We then extend this approach to iteratively build
larger images with the same level of detail. Our goal is to obtain a scalable
methodology to generate high resolution samples typically found in satellite
imagery data sets. We introduce a conditional progressive Generative
Adversarial Network (GAN) that generates the missing tile in an image, using
as input three initial adjacent tiles encoded in a latent vector by a
Wasserstein auto-encoder. We focus on a set of images used by the United
Nations Satellite Centre (UNOSAT) to train flood detection tools, and validate
the quality of synthetic images in a realistic setup.
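The tile-completion scheme described in the abstract can be sketched as follows. This is a toy illustration only: `wae_encode` and `gan_generate` are stand-ins for the trained Wasserstein auto-encoder and conditional progressive GAN (which the paper does not specify at this level of detail), and averaging the context simply keeps the sketch runnable without weights.

```python
import numpy as np

TILE = 8  # tile side in pixels (illustrative size)
rng = np.random.default_rng(0)

def wae_encode(tl, tr, bl):
    # Stand-in for the Wasserstein auto-encoder: pack the three known
    # adjacent tiles into a single latent vector.
    return np.concatenate([t.ravel() for t in (tl, tr, bl)])

def gan_generate(z):
    # Stand-in for the conditional progressive GAN generator: a trained
    # model would synthesize the missing tile from the latent; averaging
    # the context is a placeholder.
    return z.reshape(3, TILE, TILE).mean(axis=0)

# One completion step: three corner tiles known, bottom-right missing.
tl, tr, bl = (rng.random((TILE, TILE)) for _ in range(3))
br = gan_generate(wae_encode(tl, tr, bl))

# Iterative growth: slide the 2x2 window so each generated tile becomes
# context for the next step, extending the canvas diagonally.
canvas = {(0, 0): tl, (0, 1): tr, (1, 0): bl, (1, 1): br}
for k in range(1, 4):
    canvas[(k, k + 1)] = rng.random((TILE, TILE))  # known neighbor (right)
    canvas[(k + 1, k)] = rng.random((TILE, TILE))  # known neighbor (below)
    z = wae_encode(canvas[(k, k)], canvas[(k, k + 1)], canvas[(k + 1, k)])
    canvas[(k + 1, k + 1)] = gan_generate(z)
```

Because every new tile is conditioned only on three already-available neighbors, the same fixed-size generator can extend the image indefinitely, which is the source of the claimed scalability.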
Related papers
- SCube: Instant Large-Scale Scene Reconstruction using VoxSplats [55.383993296042526]
We present SCube, a novel method for reconstructing large-scale 3D scenes (geometry, appearance, and semantics) from a sparse set of posed images.
Our method encodes reconstructed scenes using a novel representation VoxSplat, which is a set of 3D Gaussians supported on a high-resolution sparse-voxel scaffold.
arXiv Detail & Related papers (2024-10-26T00:52:46Z)
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
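The surprisal idea behind ZED can be illustrated with a toy stand-in. The real detector uses a learned lossless-coding model of real images; here a crude left-neighbor Gaussian predictor plays that role, purely to show that images far from a model of natural statistics score as more "surprising". All names and parameters below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def surprisal_score(image, sigma=0.1):
    """Toy surprisal measure: model each pixel as a Gaussian centred on
    its left neighbour and average the negative log-likelihood. A high
    score means the image is 'surprising' under this model of locally
    correlated, natural-like signals."""
    err = image[:, 1:] - image[:, :-1]          # prediction residuals
    nll = 0.5 * (err / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))
    return float(nll.mean())

# A locally correlated "natural-like" signal vs. i.i.d. noise.
smooth = np.cumsum(rng.normal(0.0, 0.01, (32, 32)), axis=1)
noise = rng.random((32, 32))
s_smooth, s_noise = surprisal_score(smooth), surprisal_score(noise)
```

The correlated signal scores far lower than the i.i.d. noise, mirroring how a model of real images assigns higher surprisal to out-of-distribution (e.g. synthetic) inputs.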
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- High-Resolution GAN Inversion for Degraded Images in Large Diverse Datasets [39.21692649763314]
In this paper, we present a novel GAN inversion framework that utilizes the powerful generative ability of StyleGAN-XL.
To ease the inversion challenge with StyleGAN-XL, Clustering & Regularize Inversion (CRI) is proposed.
We validate our CRI scheme on multiple restoration tasks (i.e., inpainting, colorization, and super-resolution) on complex natural images, and show favorable quantitative and qualitative results.
arXiv Detail & Related papers (2023-02-07T11:24:11Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Any-resolution Training for High-resolution Image Synthesis [55.19874755679901]
Generative models operate at fixed resolution, even though natural images come in a variety of sizes.
We argue that every pixel matters and create datasets with variable-size images, collected at their native resolutions.
We introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions.
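The data side of continuous-scale training can be sketched as below. Function and parameter names are illustrative: a random continuous scale is drawn, a scale-dependent region is cropped from the native-resolution image, and it is resampled to a fixed patch size. Nearest-neighbour subsampling is used for brevity; a real pipeline would resample with anti-aliasing.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_patch(image, patch=16, scales=(1.0, 2.0)):
    """Draw a random scale s, crop a region covering patch*s pixels of the
    native-resolution image, and subsample it to a fixed patch size."""
    h, w = image.shape
    s = rng.uniform(*scales)                 # continuous random scale
    size = min(int(round(patch * s)), h, w)  # region covered at scale s
    y = rng.integers(0, h - size + 1)        # random crop position
    x = rng.integers(0, w - size + 1)
    crop = image[y:y + size, x:x + size]
    idx = np.linspace(0, size - 1, patch).round().astype(int)
    return crop[np.ix_(idx, idx)], s

native = rng.random((128, 96))  # variable-size, native-resolution image
p, s = sample_patch(native)
```

Feeding the generator fixed-size patches drawn at varying scales is what lets training use every image at its native resolution instead of resizing the whole dataset to one size.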
arXiv Detail & Related papers (2022-04-14T17:59:31Z)
- Learning to Rearrange Voxels in Binary Segmentation Masks for Smooth Manifold Triangulation [0.8968417883198374]
We propose that high-resolution images can be reconstructed in a coarse-to-fine fashion, where a deep learning algorithm is only responsible for generating a coarse representation of the image.
For producing the high-resolution outcome, we propose two novel methods: learned voxel rearrangement of the coarse output and hierarchical image synthesis.
Compared to the coarse output, the high-resolution counterpart allows for smooth surface triangulation, which can be 3D-printed in the highest possible quality.
arXiv Detail & Related papers (2021-08-11T15:11:34Z)
- Spatially-Adaptive Pixelwise Networks for Fast Image Translation [57.359250882770525]
We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation.
We use pixel-wise networks; that is, each pixel is processed independently of others.
Our model is up to 18x faster than state-of-the-art baselines.
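A minimal sketch of the pixel-wise idea, with placeholder weights (the paper's architecture additionally predicts these parameters spatially from a low-resolution network, which is omitted here): every spatial location passes through the same small MLP with no cross-pixel connections, equivalent to a stack of 1x1 convolutions.

```python
import numpy as np

rng = np.random.default_rng(2)

def pixelwise_mlp(feats, w1, b1, w2, b2):
    """Apply the same two-layer MLP to every pixel independently.
    feats: (H, W, C_in) -> (H, W, C_out)."""
    hidden = np.maximum(feats @ w1 + b1, 0.0)  # 1x1 "conv" + ReLU
    return hidden @ w2 + b2                    # second 1x1 "conv"

H, W, C = 4, 4, 3
feats = rng.random((H, W, C))
w1, b1 = rng.standard_normal((C, 8)), np.zeros(8)
w2, b2 = rng.standard_normal((8, 3)), np.zeros(3)
out = pixelwise_mlp(feats, w1, b1, w2, b2)
```

Because no pixel sees any other, spatially permuting the input permutes the output identically, and all pixels can be processed in parallel, which is where the speed advantage comes from.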
arXiv Detail & Related papers (2020-12-05T10:02:03Z)
- Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to learn sufficiently powerful representations in this setting.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z)
- Unsupervised Real Image Super-Resolution via Generative Variational AutoEncoder [47.53609520395504]
We revisit the classic example based image super-resolution approaches and come up with a novel generative model for perceptual image super-resolution.
We propose a joint image denoising and super-resolution model via Variational AutoEncoder.
With the aid of the discriminator, an additional super-resolution subnetwork is attached to super-resolve the denoised image with photo-realistic visual quality.
arXiv Detail & Related papers (2020-04-27T13:49:36Z)
- Deep Attentive Generative Adversarial Network for Photo-Realistic Image De-Quantization [25.805568996596783]
De-quantization can improve the visual quality of low bit-depth images displayed on high bit-depth screens.
This paper proposes the DAGAN algorithm to perform super-resolution on image intensity resolution.
The DenseResAtt module consists of dense residual blocks equipped with a self-attention mechanism.
arXiv Detail & Related papers (2020-04-07T06:45:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.