Mask Conditional Synthetic Satellite Imagery
- URL: http://arxiv.org/abs/2302.04305v1
- Date: Wed, 8 Feb 2023 19:42:37 GMT
- Title: Mask Conditional Synthetic Satellite Imagery
- Authors: Van Anh Le, Varshini Reddy, Zixi Chen, Mengyuan Li, Xinran Tang,
Anthony Ortiz, Simone Fobi Nsutezo, Caleb Robinson
- Abstract summary: A mask-conditional synthetic image generation model for creating synthetic satellite imagery datasets.
We show that it is possible to train an upstream conditional synthetic imagery generator and use that generator to create synthetic imagery conditioned on land cover masks.
We find that incorporating a mixture of real and synthetic imagery acts as a data augmentation method, producing better models than using only real imagery.
- Score: 10.235751992415867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we propose a mask-conditional synthetic image generation model
for creating synthetic satellite imagery datasets. Given a dataset of real
high-resolution images and accompanying land cover masks, we show that it is
possible to train an upstream conditional synthetic imagery generator, use that
generator to create synthetic imagery with the land cover masks, then train a
downstream model on the synthetic imagery and land cover masks that achieves
similar test performance to a model that was trained with the real imagery.
Further, we find that incorporating a mixture of real and synthetic imagery
acts as a data augmentation method, producing better models than using only
real imagery (0.5834 vs. 0.5235 mIoU). Finally, we find that encouraging
diversity of outputs in the upstream model is a necessary component for
improved downstream task performance. We have released code for reproducing our
work on GitHub:
https://github.com/ms-synthetic-satellite-image/synthetic-satellite-imagery
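The real-plus-synthetic augmentation and the mIoU metric reported in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code: the file paths, class count, and pairing scheme are placeholders.

```python
import random
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Augmentation as described in the abstract: train the downstream
# segmentation model on a shuffled mixture of real (image, mask) pairs
# and synthetic pairs produced by the upstream mask-conditional generator.
real_pairs = [(f"real/img_{i}.tif", f"masks/mask_{i}.tif") for i in range(4)]
synthetic_pairs = [(f"synthetic/img_{i}.tif", f"masks/mask_{i}.tif") for i in range(4)]
training_set = real_pairs + synthetic_pairs
random.shuffle(training_set)

# Toy mIoU check: predicting class 0 everywhere against a half-0/half-1 mask.
pred = np.zeros((2, 2), dtype=int)
gt = np.array([[0, 0], [1, 1]])
print(mean_iou(pred, gt, num_classes=2))  # class 0: 0.5, class 1: 0.0 -> 0.25
```

In the paper's setting the synthetic masks are real masks reused as conditioning inputs, so downstream training needs no extra annotation effort.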
Related papers
- The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better [39.57368843211441]
Every synthetic image ultimately originates from the upstream data used to train the generator.
We compare finetuning on task-relevant, targeted synthetic data generated by Stable Diffusion against finetuning on targeted real images retrieved directly from LAION-2B.
Our analysis suggests that this underperformance is partially due to generator artifacts and inaccurate task-relevant visual details in the synthetic images.
arXiv Detail & Related papers (2024-06-07T18:04:21Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving [48.27575423606407]
We introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images.
We present a new synthetic fog dataset named SynFog, which features both sky light and active lighting conditions.
Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy.
arXiv Detail & Related papers (2024-03-25T18:32:41Z)
- FreeMask: Synthetic Images with Dense Annotations Make Stronger Segmentation Models [62.009002395326384]
FreeMask resorts to synthetic images from generative models to ease the burden of data collection and annotation procedures.
We first synthesize abundant training images conditioned on the semantic masks provided by realistic datasets.
We investigate the role of synthetic images by joint training with real images, or pre-training for real images.
arXiv Detail & Related papers (2023-10-23T17:57:27Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the power and shortcomings of synthetic data from existing generative models, and propose strategies for applying synthetic data more effectively to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- DiVAE: Photorealistic Images Synthesis with Denoising Diffusion Decoder [73.1010640692609]
We propose a VQ-VAE architecture model with a diffusion decoder (DiVAE) to work as the reconstructing component in image synthesis.
Our model achieves state-of-the-art results and, in particular, generates more photorealistic images.
arXiv Detail & Related papers (2022-06-01T10:39:12Z)
- Synthetic Data for Model Selection [2.4499092754102874]
We show that synthetic data can be beneficial for model selection.
We introduce a novel method to calibrate the synthetic error estimation to fit that of the real domain.
arXiv Detail & Related papers (2021-05-03T09:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.