Generate Your Own Scotland: Satellite Image Generation Conditioned on Maps
- URL: http://arxiv.org/abs/2308.16648v1
- Date: Thu, 31 Aug 2023 11:44:40 GMT
- Title: Generate Your Own Scotland: Satellite Image Generation Conditioned on Maps
- Authors: Miguel Espinosa, Elliot J. Crowley
- Abstract summary: We show that state-of-the-art pretrained diffusion models can be conditioned on cartographic data to generate realistic satellite images.
We provide two large datasets of paired OpenStreetMap images and satellite views over the region of Mainland Scotland and the Central Belt.
- Score: 5.49341063007719
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite recent advancements in image generation, diffusion models still
remain largely underexplored in Earth Observation. In this paper we show that
state-of-the-art pretrained diffusion models can be conditioned on cartographic
data to generate realistic satellite images. We provide two large datasets of
paired OpenStreetMap images and satellite views over the region of Mainland
Scotland and the Central Belt. We train a ControlNet model and qualitatively
evaluate the results, demonstrating that high image quality and map fidelity
can be achieved simultaneously. Finally, we offer some insights into the
opportunities and challenges of applying these models to remote sensing. Our
model weights and
code for creating the dataset are publicly available at
https://github.com/miquel-espinosa/map-sat.
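As a rough illustration of the inference setup, the sketch below conditions a pretrained Stable Diffusion model on a rendered OpenStreetMap tile via a ControlNet, using the Hugging Face diffusers library. The checkpoint path, prompt, and tile filename are placeholders rather than the authors' exact configuration; the released weights live in the repository above.

```python
# Hedged sketch: generate a satellite view from an OSM tile with a ControlNet.
# "path/to/map-sat-controlnet" and the prompt are assumed placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "path/to/map-sat-controlnet", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

map_tile = load_image("osm_tile.png")  # rendered OpenStreetMap tile (condition)
image = pipe(
    "a realistic satellite image", image=map_tile, num_inference_steps=50
).images[0]
image.save("generated_satellite.png")
```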
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
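As a rough sketch of how such synthetic pairs are consumed downstream, the snippet below (generic code, not SatSynth's; `generator` is an assumed trained joint diffusion model) appends generated image-mask pairs to a real training set:

```python
# Hedged sketch: augment a segmentation dataset with generated image-mask pairs.
import torch
from torch.utils.data import ConcatDataset, TensorDataset

def synthesize_pairs(generator, n_samples):
    images, masks = [], []
    for _ in range(n_samples):
        image, mask = generator.sample()  # assumed API: jointly sampled pair
        images.append(image)
        masks.append(mask)
    return TensorDataset(torch.stack(images), torch.stack(masks))

# Real pairs plus synthetic pairs form the augmented training set, e.g.:
# augmented = ConcatDataset([real_dataset, synthesize_pairs(generator, 10_000)])
```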
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; the result is then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks, including temporal generation, super-resolution given multi-spectral inputs, and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z)
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
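The sampling-time issue stems from the sequential nature of diffusion sampling. Below is a generic DDPM ancestral-sampling sketch (not the paper's code; `model` is an assumed trained noise predictor) making the T sequential denoising steps per image explicit:

```python
# Hedged sketch of DDPM ancestral sampling: every image costs T model calls.
import torch

@torch.no_grad()
def ddpm_sample(model, shape, betas, device="cuda"):
    betas = betas.to(device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape, device=device)      # start from pure Gaussian noise
    for t in reversed(range(len(betas))):      # T sequential steps, e.g. T=1000
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = model(x, t_batch)                # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x
```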
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method that leverages the wide availability of satellite imagery.
We observe significant improvements of up to 25% absolute mIoU when models are pre-trained with our proposed method.
We find that the learnt features generalize between disparate regions, opening up the possibility of applying the proposed pre-training scheme broadly.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
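For illustration, a minimal InfoNCE-style contrastive loss of the kind commonly used for such pre-training (a generic, one-directional sketch; the paper's exact formulation may differ), where `z1` and `z2` are embeddings of two views of the same satellite patch:

```python
# Hedged sketch: InfoNCE loss over a batch of paired view embeddings.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (N, N) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)
```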
- Image to Image Translation: Generating maps from satellite images [18.276666800052006]
Image-to-image translation is employed to convert satellite images into their corresponding maps.
Generative adversarial networks, conditional adversarial networks, and co-variational autoencoders are used.
The model is trained as a conditional generative adversarial network, which comprises a generator model that produces fake images and a discriminator that distinguishes them from real ones.
arXiv Detail & Related papers (2021-05-19T16:58:04Z)
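A minimal pix2pix-style training step for this satellite-to-map setup (a generic sketch under assumed details, not the paper's code):

```python
# Hedged sketch: one conditional-GAN step. G maps satellite -> map;
# D scores (satellite, map) pairs as real or fake.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, sat, real_map, l1_weight=100.0):
    # Discriminator: real pairs vs. generated pairs (generator detached).
    fake_map = G(sat)
    d_real = D(torch.cat([sat, real_map], dim=1))
    d_fake = D(torch.cat([sat, fake_map.detach()], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D while staying close to the ground-truth map (L1 term).
    d_fake = D(torch.cat([sat, fake_map], dim=1))
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake_map, real_map))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```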
- Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation-guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z)
- Procedural 3D Terrain Generation using Generative Adversarial Networks [0.0]
We use Generative Adversarial Networks (GANs) to yield realistic 3D environments based on the distribution of remotely sensed images of landscapes captured by satellites or drones.
We are able to construct 3D scenery with a plausible height distribution and colorization that match the remotely sensed landscapes provided during training.
arXiv Detail & Related papers (2020-10-13T14:15:10Z)
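One simple way to realize this is a DCGAN-style generator whose output channels encode both color and height; the sketch below is a generic illustration under assumed architecture choices, not the paper's model:

```python
# Hedged sketch: a generator whose 4 output channels are RGB plus a height map.
import torch
import torch.nn as nn

class TerrainGenerator(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 4, 4, 2, 1), nn.Tanh(),                                # 32x32
        )

    def forward(self, z):                      # z: (N, z_dim) latent noise
        out = self.net(z.view(z.size(0), -1, 1, 1))
        return out[:, :3], out[:, 3:]          # (RGB color, height map)
```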
- Model Generalization in Deep Learning Applications for Land Cover Mapping [19.570391828806567]
We show that when deep learning models are trained on data from specific continents/seasons, there is a high degree of variability in model performance on out-of-sample continents/seasons.
This suggests that a model that accurately predicts land-use classes in one continent or season will not necessarily do so in a different continent or season.
arXiv Detail & Related papers (2020-08-09T01:50:52Z)
- Segmentation of Satellite Imagery using U-Net Models for Land Cover Classification [2.28438857884398]
The focus of this paper is the use of a convolutional machine learning model with a modified U-Net structure to create land cover classification maps from satellite imagery.
The aim of the research is to train and test convolutional models for automatic land cover mapping and to assess their usability for improving land cover mapping accuracy and change detection.
arXiv Detail & Related papers (2020-03-05T20:07:48Z)
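For reference, a minimal U-Net in the same spirit (a generic sketch; the paper uses a modified variant) producing per-pixel class logits:

```python
# Hedged sketch: a tiny one-level U-Net for land cover segmentation.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=6):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 64), block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = block(128, 64)                 # 64 skip + 64 upsampled channels
        self.head = nn.Conv2d(64, num_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                          # full-resolution features
        e2 = self.enc2(self.pool(e1))              # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)
```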
This list is automatically generated from the titles and abstracts of the papers on this site.