SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and
Building Change Detection
- URL: http://arxiv.org/abs/2309.01907v1
- Date: Tue, 5 Sep 2023 02:42:41 GMT
- Title: SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and
Building Change Detection
- Authors: Jian Song and Hongruixuan Chen and Naoto Yokoya
- Abstract summary: We present SyntheWorld, a synthetic dataset unparalleled in quality, diversity, and scale.
It includes 40,000 images with submeter-level pixel resolution and fine-grained land cover annotations in eight categories.
We will release SyntheWorld to facilitate remote sensing image processing research.
- Score: 20.985372561774415
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Synthetic datasets, recognized for their cost-effectiveness, play a pivotal
role in advancing computer vision tasks and techniques. However, when it comes
to remote sensing image processing, the creation of synthetic datasets becomes
challenging due to the demand for larger-scale and more diverse 3D models. This
complexity is compounded by the difficulties associated with real remote
sensing datasets, including limited data acquisition and high annotation costs,
which amplify the need for high-quality synthetic alternatives. To address
this, we present SyntheWorld, a synthetic dataset unparalleled in quality,
diversity, and scale. It includes 40,000 images with submeter-level pixel
resolution and fine-grained land cover annotations in eight categories, and it
also provides 40,000 bitemporal image pairs with building change annotations
for the building change detection task. We conduct experiments on multiple benchmark
remote sensing datasets to verify the effectiveness of SyntheWorld and to
investigate the conditions under which our synthetic data yield advantages. We
will release SyntheWorld to facilitate remote sensing image processing
research.
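As a rough illustration of how bitemporal pairs with change annotations are typically consumed, below is a minimal PyTorch-style loading sketch. The directory layout (imagesA/, imagesB/, change_masks/) and file naming are assumptions for illustration only, not the released SyntheWorld structure; consult the dataset's documentation for the actual organization.

```python
# Minimal sketch of a bitemporal change-detection data loader (PyTorch).
# NOTE: the directory layout and file names below are hypothetical.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class BitemporalChangeDataset(Dataset):
    """Yields (image_t1, image_t2, change_mask) triplets as tensors."""

    def __init__(self, root: str):
        self.root = Path(root)
        # Assumed layout: root/imagesA/*.png, root/imagesB/*.png, root/change_masks/*.png
        self.names = sorted(p.name for p in (self.root / "imagesA").glob("*.png"))

    def __len__(self) -> int:
        return len(self.names)

    def _load_image(self, path: Path) -> torch.Tensor:
        # HWC uint8 -> CHW float32 in [0, 1]
        arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        return torch.from_numpy(arr).permute(2, 0, 1)

    def __getitem__(self, idx: int):
        name = self.names[idx]
        img_a = self._load_image(self.root / "imagesA" / name)
        img_b = self._load_image(self.root / "imagesB" / name)
        # Binary building-change mask, stored as a single-channel label image.
        mask = np.asarray(Image.open(self.root / "change_masks" / name), dtype=np.int64)
        return img_a, img_b, torch.from_numpy(mask)
```

A siamese or early-fusion change-detection network would then take the two images as input and predict the per-pixel change mask; the eight land cover categories would be handled analogously with per-pixel class label masks.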
Related papers
- SynDroneVision: A Synthetic Dataset for Image-Based Drone Detection [3.9061053498250753]
We present SynDroneVision, a synthetic dataset specifically designed for RGB-based drone detection in surveillance applications.
Our findings demonstrate that SynDroneVision is a valuable resource for real-world data enrichment, achieving notable enhancements in model performance and robustness, while significantly reducing the time and costs of real-world data acquisition.
arXiv Detail & Related papers (2024-11-08T15:22:49Z)
- SynRS3D: A Synthetic Dataset for Global 3D Semantic Understanding from Monocular Remote Sensing Imagery [17.364630812389038]
Global semantic 3D understanding from single-view high-resolution remote sensing (RS) imagery is crucial for Earth Observation (EO).
We develop a specialized synthetic data generation pipeline for EO and introduce SynRS3D, the largest synthetic RS 3D dataset.
SynRS3D comprises 69,667 high-resolution optical images that cover six different city styles worldwide and feature eight land cover types, precise height information, and building change masks.
arXiv Detail & Related papers (2024-06-26T08:04:42Z)
- Hardness-Aware Scene Synthesis for Semi-Supervised 3D Object Detection [59.33188668341604]
3D object detection serves as the fundamental task of autonomous driving perception.
It is costly to obtain high-quality annotations for point cloud data.
We propose a hardness-aware scene synthesis (HASS) method to generate adaptive synthetic scenes.
arXiv Detail & Related papers (2024-05-27T17:59:23Z)
- Best Practices and Lessons Learned on Synthetic Data [83.63271573197026]
The success of AI models relies on the availability of large, diverse, and high-quality datasets.
Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns.
arXiv Detail & Related papers (2024-04-11T06:34:17Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances [76.34037366117234]
We introduce a new dataset called Robot Control Gestures (RoCoG-v2).
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into five levels of difficulty and an unseen object set to evaluate different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset [6.812704277866377]
We introduce a synthetic aerial photogrammetry point clouds generation pipeline.
Unlike generating synthetic data in virtual games, the proposed pipeline simulates the reconstruction process of the real environment.
We present a richly-annotated synthetic 3D aerial photogrammetry point cloud dataset.
arXiv Detail & Related papers (2022-03-17T03:50:40Z)
- Synthetic Data for Model Selection [2.4499092754102874]
We show that synthetic data can be beneficial for model selection.
We introduce a novel method to calibrate the synthetic error estimation to fit that of the real domain.
arXiv Detail & Related papers (2021-05-03T09:52:03Z)
- Synthetic Data and Hierarchical Object Detection in Overhead Imagery [0.0]
We develop novel synthetic data generation and augmentation techniques for enhancing low/zero-sample learning in satellite imagery.
To test the effectiveness of synthetic imagery, we employ it in the training of detection models and our two-stage model, and evaluate the resulting models on real satellite images.
arXiv Detail & Related papers (2021-01-29T22:52:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.