Semantic Segmentation and Data Fusion of Microsoft Bing 3D Cities and
Small UAV-based Photogrammetric Data
- URL: http://arxiv.org/abs/2008.09648v1
- Date: Fri, 21 Aug 2020 18:56:05 GMT
- Authors: Meida Chen, Andrew Feng, Kyle McCullough, Pratusha Bhuvana Prasad,
Ryan McAlinden, Lucio Soibelman
- Abstract summary: The authors present a fully automated data segmentation and object information extraction framework for creating simulation terrain from UAV-based photogrammetric data.
Data quality issues in the aircraft-based photogrammetric data are identified.
The authors also propose a data registration workflow that combines the traditional iterative closest point (ICP) algorithm with the extracted semantic information.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With state-of-the-art sensing and photogrammetric techniques, the
Microsoft Bing Maps team has created over 125 highly detailed 3D cities from 11
different countries, covering hundreds of thousands of square kilometers. The
3D city models were created photogrammetrically from high-resolution images
captured by aircraft-mounted cameras. Such a large 3D city database has caught
the attention of the US Army for creating virtual simulation environments to
support military operations. However, the 3D city models lack semantic
information such as buildings, vegetation, and ground, and therefore cannot
support sophisticated user-level and system-level interaction.
At I/ITSEC 2019, the authors presented a fully automated data segmentation and
object information extraction framework for creating simulation terrain using
UAV-based photogrammetric data. This paper discusses the next steps in
extending our designed data segmentation framework for segmenting 3D city data.
In this study, the authors first investigated the strengths and limitations of
the existing framework when applied to the Bing data. The main differences
between UAV-based and aircraft-based photogrammetric data are highlighted. The
data quality issues in the aircraft-based photogrammetric data, which can
negatively affect the segmentation performance, are identified. Based on the
findings, a workflow was designed specifically for segmenting Bing data while
considering its characteristics. In addition, since the ultimate goal is to
combine the use of both small unmanned aerial vehicle (UAV) collected data and
the Bing data in a virtual simulation environment, data from these two sources
needed to be aligned and registered together. To this end, the authors also
proposed a data registration workflow that combines the traditional iterative
closest point (ICP) algorithm with the extracted semantic information.
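The semantic-aware registration idea can be sketched as follows: both point clouds are first reduced to a single stable semantic class (e.g. ground), and a standard point-to-point ICP is then run on the filtered points so that class-mismatched geometry does not bias the alignment. This is a minimal illustration under our own assumptions, not the authors' implementation; the function names and the brute-force nearest-neighbour search are simplifications (a KD-tree would be used at scale).

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def semantic_icp(source, target, labels_src, labels_tgt, keep_class, iters=20):
    """ICP restricted to one semantic class; returns (R, t) aligning source to target."""
    src = source[labels_src == keep_class].copy()
    tgt = target[labels_tgt == keep_class]
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences (sketch only).
        d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matched = tgt[d.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        # Compose the incremental transform into the running total.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Filtering to a single class such as ground before ICP is what makes the semantic information useful here: trees, vehicles, and reconstruction artifacts that differ between the UAV and aircraft collections are excluded from the correspondence search.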
Related papers
- UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z) - Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z) - SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point
Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previous largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z) - VPFNet: Improving 3D Object Detection with Virtual Point based LiDAR and
Stereo Data Fusion [62.24001258298076]
VPFNet is a new architecture that cleverly aligns and aggregates the point cloud and image data at the 'virtual' points.
Our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking 1st since May 21st, 2021.
arXiv Detail & Related papers (2021-11-29T08:51:20Z) - Ground material classification and for UAV-based photogrammetric 3D data
A 2D-3D Hybrid Approach [1.3359609092684614]
In recent years, photogrammetry has been widely used in many areas to create 3D virtual data representing the physical environment.
These cutting-edge technologies have caught the US Army and Navy's attention for the purpose of rapid 3D battlefield reconstruction, virtual training, and simulations.
arXiv Detail & Related papers (2021-09-24T22:29:26Z) - H3D: Benchmark on Semantic Segmentation of High-Resolution 3D Point
Clouds and textured Meshes from UAV LiDAR and Multi-View-Stereo [4.263987603222371]
This paper introduces a 3D dataset which is unique in three ways.
It depicts the village of Hessigheim (Germany) henceforth referred to as H3D.
It is designed to promote research in the field of 3D data analysis on the one hand and to evaluate and rank emerging approaches on the other.
arXiv Detail & Related papers (2021-02-10T09:33:48Z) - Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial
Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km², sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z) - Generating synthetic photogrammetric data for training deep learning
based 3D point cloud segmentation models [0.0]
At I/ITSEC 2019, the authors presented a fully-automated workflow to segment 3D photogrammetric point-clouds/meshes and extract object information.
The ultimate goal is to create realistic virtual environments and provide the necessary information for simulation.
arXiv Detail & Related papers (2020-08-21T18:50:42Z) - A Nearest Neighbor Network to Extract Digital Terrain Models from 3D
Point Clouds [1.6249267147413524]
We present an algorithm that operates on 3D-point clouds and estimates the underlying DTM for the scene using an end-to-end approach.
Our model learns neighborhood information and seamlessly integrates this with point-wise and block-wise global features.
arXiv Detail & Related papers (2020-05-21T15:54:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.