Image to Image Translation : Generating maps from satellite images
- URL: http://arxiv.org/abs/2105.09253v1
- Date: Wed, 19 May 2021 16:58:04 GMT
- Title: Image to Image Translation : Generating maps from satellite images
- Authors: Vaishali Ingale, Rishabh Singh, Pragati Patwal
- Abstract summary: Image-to-image translation is employed to convert a satellite image to the corresponding map.
Generative adversarial networks, conditional adversarial networks, and co-variational autoencoders are used.
We train our model as a Conditional Generative Adversarial Network, which comprises a generator model that generates fake images and a discriminator that classifies images as real or fake.
- Score: 18.276666800052006
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generation of maps from satellite images is conventionally done with a range of
tools. Maps have become an important part of everyday life, and producing them from
satellite images can be expensive, but generative models can address this challenge.
These models aim to find the patterns between the input and output images. Image-to-image
translation is employed to convert a satellite image into the corresponding map. Different
techniques for image-to-image translation, such as generative adversarial networks,
conditional adversarial networks, and co-variational autoencoders, are used to generate
the corresponding human-readable map for a region, taking a satellite image at a given
zoom level as input. We train our model as a Conditional Generative Adversarial Network,
which comprises a generator model that generates fake images and a discriminator model
that tries to classify each image as real or fake. Both models are trained synchronously
in an adversarial manner, each trying to fool the other, which enhances the model's
performance.
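The abstract does not state the exact loss, but the setup it describes matches the standard conditional-GAN objective, where the generator G maps a satellite image x to a map and the discriminator D scores (satellite, map) pairs:

```latex
\min_G \max_D \; \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x}\left[\log\big(1 - D(x, G(x))\big)\right]
```

Below is a minimal PyTorch sketch of one adversarial training step under this setup. The tiny architectures, the stand-in tensors, and the L1 weight are illustrative assumptions (the L1 term is a conventional addition in pix2pix-style translation), not the authors' exact model:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder; the paper's generator is presumably deeper (e.g. a U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, sat):
        return self.net(sat)

class Discriminator(nn.Module):
    """PatchGAN-style critic over concatenated (satellite, map) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )
    def forward(self, sat, map_img):
        return self.net(torch.cat([sat, map_img], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Stand-in batch; a real pipeline would load aligned satellite/map tiles.
sat, real_map = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)

# Discriminator step: push real pairs toward 1, generated pairs toward 0.
fake_map = G(sat).detach()
d_real, d_fake = D(sat, real_map), D(sat, fake_map)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool D, plus an L1 term pulling the output toward the real map.
fake_map = G(sat)
d_out = D(sat, fake_map)
loss_g = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake_map, real_map)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

Each step first updates D to separate real from generated pairs, then updates G to fool the frozen D; alternating these two updates is the synchronous adversarial training the abstract describes.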
Related papers
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach generates texture colors at the point level for a given geometry using a 3D diffusion model first, which is then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model is proficient at generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that enables generating highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z)
- DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks including temporal generation, superresolution given multi-spectral inputs and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z)
- Conditional Progressive Generative Adversarial Network for satellite image generation [0.7734726150561089]
We formulate the image generation task as completion of an image where one out of three corners is missing.
We then extend this approach to iteratively build larger images with the same level of detail.
Our goal is to obtain a scalable methodology to generate high resolution samples typically found in satellite imagery data sets.
arXiv Detail & Related papers (2022-11-28T13:33:53Z)
- Dual Pyramid Generative Adversarial Networks for Semantic Image Synthesis [94.76988562653845]
The goal of semantic image synthesis is to generate photo-realistic images from semantic label maps.
Current state-of-the-art approaches, however, still struggle to generate realistic objects in images at various scales.
We propose a Dual Pyramid Generative Adversarial Network (DP-GAN) that learns the conditioning of spatially-adaptive normalization blocks at all scales jointly.
arXiv Detail & Related papers (2022-10-08T18:45:44Z)
- Geometry-Guided Street-View Panorama Synthesis from Satellite Imagery [80.6282101835164]
We present a new approach for synthesizing a novel street-view panorama given an overhead satellite image.
Our method generates a Google-style omnidirectional street-view panorama, as if it were captured from the same geographical location as the center of the satellite patch.
arXiv Detail & Related papers (2021-03-02T10:27:05Z)
- Semantic Segmentation of Medium-Resolution Satellite Imagery using Conditional Generative Adversarial Networks [3.4797121357690153]
We propose a Conditional Generative Adversarial Network (CGAN)-based approach to image-to-image translation for high-resolution satellite imagery.
We find that the CGAN model outperforms the CNN model of similar complexity by a significant margin on an unseen imbalanced test dataset.
arXiv Detail & Related papers (2020-12-05T18:18:45Z)
- Procedural 3D Terrain Generation using Generative Adversarial Networks [0.0]
We use Generative Adversarial Networks (GAN) to yield realistic 3D environments based on the distribution of remotely sensed images of landscapes, captured by satellites or drones.
We are able to construct 3D scenery consisting of a plausible height distribution and colorization, in relation to the remotely sensed landscapes provided during training.
arXiv Detail & Related papers (2020-10-13T14:15:10Z)
- Swapping Autoencoder for Deep Image Manipulation [94.33114146172606]
We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation.
The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image.
Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.
arXiv Detail & Related papers (2020-07-01T17:59:57Z)
- Domain Adaptation for Image Dehazing [72.15994735131835]
Most existing methods train a dehazing model on synthetic hazy images, which generalizes poorly to real hazy images due to domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
arXiv Detail & Related papers (2020-05-10T13:54:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.