STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point
Cloud Dataset
- URL: http://arxiv.org/abs/2203.09065v1
- Date: Thu, 17 Mar 2022 03:50:40 GMT
- Title: STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point
Cloud Dataset
- Authors: Meida Chen, Qingyong Hu, Thomas Hugues, Andrew Feng, Yu Hou, Kyle
McCullough, Lucio Soibelman
- Abstract summary: We introduce a synthetic aerial photogrammetry point cloud generation pipeline.
Unlike generating synthetic data in virtual games, the proposed pipeline simulates the reconstruction process of the real environment.
We present a richly-annotated synthetic 3D aerial photogrammetry point cloud dataset.
- Score: 6.812704277866377
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Although various 3D datasets with different functions and scales have been
proposed recently, it remains challenging for individuals to complete the whole
pipeline of large-scale data collection, sanitization, and annotation.
Moreover, the created datasets usually suffer from extremely imbalanced class
distributions or contain low-quality samples. Motivated by this, we explore a
procedural synthetic 3D data generation paradigm that equips individuals with
the full capability of creating large-scale annotated photogrammetry point
clouds. Specifically, we introduce a synthetic aerial photogrammetry point
cloud generation pipeline that takes full advantage of open geospatial data
sources and off-the-shelf commercial packages. Unlike synthetic data generated
in virtual games, which is usually limited to gaming environments created by
artists, the proposed pipeline simulates the reconstruction process of a real
environment by following the same UAV flight pattern over different synthetic
terrain shapes and building densities, which ensures quality, noise patterns,
and diversity similar to real data. In addition, precise semantic and instance
annotations can be generated fully automatically, avoiding expensive and
time-consuming manual annotation. Based on the proposed pipeline, we present a
richly-annotated synthetic 3D aerial photogrammetry point cloud dataset, termed
STPLS3D, with more than 16 $km^2$ of landscapes and up to 18 fine-grained
semantic categories. For verification, we also provide a parallel dataset
collected from four areas in the real environment. Extensive experiments
conducted on our datasets demonstrate the effectiveness and quality of the
proposed synthetic dataset.
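To make the flight-pattern simulation concrete, below is a minimal Python
sketch of the kind of serpentine (lawnmower) UAV survey the abstract describes:
nadir camera positions are laid out on a grid at a fixed height above a
procedurally generated terrain, so the same flight plan can be replayed over
different terrain shapes. All function names, parameters, and the toy terrain
surface are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a serpentine (lawnmower) UAV flight plan over a
# synthetic terrain; not the STPLS3D authors' implementation.
import math
from typing import List, Tuple


def synthetic_terrain_height(x: float, y: float) -> float:
    """Toy stand-in for a procedurally generated terrain surface (meters)."""
    return 5.0 * math.sin(x / 120.0) * math.cos(y / 150.0)


def lawnmower_flight(extent_m: float = 400.0,
                     line_spacing_m: float = 40.0,
                     shot_spacing_m: float = 20.0,
                     agl_m: float = 100.0) -> List[Tuple[float, float, float]]:
    """Generate nadir camera positions on a serpentine grid.

    agl_m is the (assumed) constant height above ground level, so the same
    flight plan adapts to whatever terrain shape lies beneath it.
    """
    poses = []
    n_lines = int(extent_m / line_spacing_m) + 1
    n_shots = int(extent_m / shot_spacing_m) + 1
    for i in range(n_lines):
        y = i * line_spacing_m
        # Alternate sweep direction on each flight line (serpentine pattern).
        cols = range(n_shots) if i % 2 == 0 else range(n_shots - 1, -1, -1)
        for j in cols:
            x = j * shot_spacing_m
            z = synthetic_terrain_height(x, y) + agl_m
            poses.append((x, y, z))
    return poses


if __name__ == "__main__":
    flight = lawnmower_flight()
    print(f"{len(flight)} camera positions, first: {flight[0]}")
```

In the pipeline described above, renders from such camera poses would be fed
to a photogrammetry package to reconstruct the point cloud, which is how the
synthetic data inherits realistic reconstruction noise.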
Related papers
- SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs [34.41011015930057]
SyntheOcc addresses the challenge of how to efficiently encode 3D geometric information as conditional input to a 2D diffusion model.
Our approach innovatively incorporates 3D semantic multi-plane images (MPIs) to provide comprehensive and spatially aligned 3D scene descriptions.
arXiv Detail & Related papers (2024-10-01T02:29:24Z)
- SynRS3D: A Synthetic Dataset for Global 3D Semantic Understanding from Monocular Remote Sensing Imagery [17.364630812389038]
Global semantic 3D understanding from single-view high-resolution remote sensing (RS) imagery is crucial for Earth Observation (EO).
We develop a specialized synthetic data generation pipeline for EO and introduce SynRS3D, the largest synthetic RS 3D dataset.
SynRS3D comprises 69,667 high-resolution optical images that cover six different city styles worldwide and feature eight land cover types, precise height information, and building change masks.
arXiv Detail & Related papers (2024-06-26T08:04:42Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Outdoor Scene Extrapolation with Hierarchical Generative Cellular Automata [70.9375320609781]
We aim to generate fine-grained 3D geometry from large-scale sparse LiDAR scans, abundantly captured by autonomous vehicles (AVs).
We propose hierarchical Generative Cellular Automata (hGCA), a spatially scalable 3D generative model that grows geometry with local kernels in a coarse-to-fine manner and is equipped with a lightweight planner to induce global consistency.
arXiv Detail & Related papers (2024-06-12T14:56:56Z)
- 3D Human Reconstruction in the Wild with Synthetic Data Using Generative Models [52.96248836582542]
We propose an effective approach based on recent diffusion models, termed HumanWild, which can effortlessly generate human images and corresponding 3D mesh annotations.
By exclusively employing generative models, we generate large-scale in-the-wild human images and high-quality annotations, eliminating the need for real-world data collection.
arXiv Detail & Related papers (2024-03-17T06:31:16Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a large-scale human motion dataset, named FreeMotion, collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z)
- Cross-Domain Synthetic-to-Real In-the-Wild Depth and Normal Estimation for 3D Scene Understanding [5.561698802097603]
A cross-domain inference technique learns from synthetic data to estimate depth and normals for in-the-wild omnidirectional 3D scenes.
We introduce UBotNet, an architecture that combines UNet and Bottleneck Transformer elements to predict consistent scene normals and depth.
We validate cross-domain synthetic-to-real depth and normal estimation on real outdoor images using UBotNet trained solely on our synthetic OmniHorizon dataset.
arXiv Detail & Related papers (2022-12-09T18:40:12Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- SynLiDAR: Learning From Synthetic LiDAR Sequential Point Cloud for Semantic Segmentation [37.00112978096702]
SynLiDAR is a synthetic LiDAR point cloud dataset with accurate geometric shapes and comprehensive semantic classes.
PCT-Net is a point cloud translation network that aims to narrow the gap with real-world point cloud data.
Experiments over multiple data augmentation and semi-supervised semantic segmentation tasks show very positive outcomes.
arXiv Detail & Related papers (2021-07-12T12:51:08Z)