Synthesizing Optical and SAR Imagery From Land Cover Maps and Auxiliary Raster Data
- URL: http://arxiv.org/abs/2011.11314v2
- Date: Tue, 25 May 2021 13:25:48 GMT
- Title: Synthesizing Optical and SAR Imagery From Land Cover Maps and Auxiliary Raster Data
- Authors: Gerald Baier, Antonin Deschemps, Michael Schmitt, Naoto Yokoya
- Abstract summary: We synthesize both optical RGB and synthetic aperture radar (SAR) remote sensing images from land cover maps and auxiliary data using generative adversarial networks (GANs).
In remote sensing, many types of data, such as digital elevation models (DEMs) or precipitation maps, are often not reflected in land cover maps but still influence image content or structure.
Our method successfully synthesizes medium (10 m) and high (1 m) resolution images when trained with the corresponding data set.
- Score: 14.407683537373325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We synthesize both optical RGB and synthetic aperture radar (SAR) remote
sensing images from land cover maps and auxiliary raster data using generative
adversarial networks (GANs). In remote sensing, many types of data, such as
digital elevation models (DEMs) or precipitation maps, are often not reflected
in land cover maps but still influence image content or structure. Including
such data in the synthesis process increases the quality of the generated
images and exerts more control on their characteristics. Spatially adaptive
normalization layers fuse both inputs and are applied to a full-blown generator
architecture consisting of encoder and decoder to take full advantage of the
information content in the auxiliary raster data. Our method successfully
synthesizes medium (10 m) and high (1 m) resolution images when trained with
the corresponding data set. We show the advantage of fusing land cover maps
and auxiliary information using mean intersection over union (mIoU), pixel
accuracy, and Fréchet inception distance (FID), evaluated with pretrained
U-Net segmentation models. Handpicked images exemplify how fusing information
avoids ambiguities in the synthesized images. By slightly editing the input,
our method can synthesize realistic changes, e.g., raising the water level.
The source code is available at https://github.com/gbaier/rs_img_synth
and we published the newly created high-resolution dataset at
https://ieee-dataport.org/open-access/geonrw.
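The spatially adaptive normalization described in the abstract can be sketched as follows. This is an illustrative NumPy reimplementation of a SPADE-style layer that fuses a land cover map with an auxiliary raster (e.g., a DEM); the function name `spade_modulate`, the 1x1 projections, and the toy shapes are assumptions for illustration, not the authors' actual code:

```python
import numpy as np

def spade_modulate(features, landcover_onehot, dem, w_gamma, w_beta, eps=1e-5):
    """SPADE-style spatially adaptive normalization (illustrative sketch).

    features:          (C, H, W) generator activations
    landcover_onehot:  (K, H, W) one-hot land cover map
    dem:               (1, H, W) auxiliary raster, e.g. a DEM
    w_gamma, w_beta:   (C, K + 1) weights of 1x1 convolutions mapping the
                       fused raster inputs to per-pixel scale and shift
    """
    # Parameter-free normalization over the spatial dimensions.
    mu = features.mean(axis=(1, 2), keepdims=True)
    sigma = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mu) / (sigma + eps)

    # Fuse land cover and auxiliary raster along the channel axis.
    cond = np.concatenate([landcover_onehot, dem], axis=0)  # (K+1, H, W)

    # A 1x1 convolution is a per-pixel linear map over the channel axis.
    gamma = np.einsum("ck,khw->chw", w_gamma, cond)
    beta = np.einsum("ck,khw->chw", w_beta, cond)

    # Spatially varying denormalization driven by the raster inputs.
    return normalized * (1.0 + gamma) + beta

# Toy example: 8-channel features on a 4x4 grid, 3 land cover classes + DEM.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4))
lc = np.eye(3)[rng.integers(0, 3, size=(4, 4))].transpose(2, 0, 1)
dem = rng.normal(size=(1, 4, 4))
out = spade_modulate(feats, lc, dem,
                     w_gamma=rng.normal(size=(8, 4)),
                     w_beta=rng.normal(size=(8, 4)))
print(out.shape)  # (8, 4, 4)
```

Because the scale and shift vary per pixel with the input rasters, the auxiliary data (here the DEM channel) can influence the generated image even where the land cover map is uniform, which is the fusion effect the paper evaluates.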
Related papers
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
arXiv Detail & Related papers (2023-08-11T14:38:11Z)
- RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis [104.53930611219654]
We present a large-scale synthetic dataset for novel view synthesis consisting of 300k images rendered from nearly 2000 complex scenes.
The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis.
Using 4 distinct sources of high-quality 3D meshes, the scenes of our dataset exhibit challenging variations in camera views, lighting, shape, materials, and textures.
arXiv Detail & Related papers (2022-05-14T13:15:32Z)
- Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Automatic Feature Highlighting in Noisy RES Data With CycleGAN [0.0]
Radio echo sounding (RES) is a common technique used in subsurface glacial imaging, which provides insight into the underlying rock and ice.
Researchers most often use a combination of manual interpretation and filtering techniques to denoise data.
Fully Convolutional Networks have been proposed as an automated alternative to identify layer boundaries in radargrams.
Here, the authors propose a GAN based model to interpolate layer boundaries through noise and highlight layers in two-dimensional glacial RES data.
arXiv Detail & Related papers (2021-08-25T15:03:47Z)
- Learning Topology from Synthetic Data for Unsupervised Depth Completion [66.26787962258346]
We present a method for inferring dense depth maps from images and sparse depth measurements.
We learn to associate sparse point clouds with dense natural shapes, using the image as evidence to validate the predicted depth map.
arXiv Detail & Related papers (2021-06-06T00:21:12Z)
- Synthetic Glacier SAR Image Generation from Arbitrary Masks Using Pix2Pix Algorithm [12.087729834358928]
Supervised machine learning requires a large amount of labeled data to achieve proper test results.
In this work, we propose to alleviate the issue of limited training data by generating synthetic SAR images with the pix2pix algorithm.
We present different models, perform a comparative study and demonstrate that this approach synthesizes convincing glaciers in SAR images with promising qualitative and quantitative results.
arXiv Detail & Related papers (2021-01-08T23:30:00Z)
- Can Synthetic Data Improve Object Detection Results for Remote Sensing Images? [15.466412729455874]
We propose the use of realistic synthetic data with a wide distribution to improve the performance of remote sensing image aircraft detection.
We randomly set the parameters during rendering, such as the size of the instance and the class of background images.
In order to make the synthetic images more realistic, we refine the synthetic images at the pixel level using CycleGAN with real unlabeled images.
arXiv Detail & Related papers (2020-06-09T02:23:22Z)
- DeepDualMapper: A Gated Fusion Network for Automatic Map Extraction using Aerial Images and Trajectories [28.89392735657318]
We propose a deep convolutional neural network called DeepDualMapper to fuse aerial image and GPS trajectory data.
Our experiments demonstrate that DeepDualMapper can fuse the information of images and trajectories much more effectively than existing approaches.
arXiv Detail & Related papers (2020-02-17T08:33:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.