Generating synthetic photogrammetric data for training deep learning
based 3D point cloud segmentation models
- URL: http://arxiv.org/abs/2008.09647v1
- Date: Fri, 21 Aug 2020 18:50:42 GMT
- Title: Generating synthetic photogrammetric data for training deep learning
based 3D point cloud segmentation models
- Authors: Meida Chen, Andrew Feng, Kyle McCullough, Pratusha Bhuvana Prasad,
Ryan McAlinden, Lucio Soibelman
- Abstract summary: At I/ITSEC 2019, the authors presented a fully-automated workflow to segment 3D photogrammetric point-clouds/meshes and extract object information.
The ultimate goal is to create realistic virtual environments and provide the necessary information for simulation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: At I/ITSEC 2019, the authors presented a fully-automated workflow to segment
3D photogrammetric point-clouds/meshes and extract object information,
including individual tree locations and ground materials (Chen et al., 2019).
The ultimate goal is to create realistic virtual environments and provide the
necessary information for simulation. We tested the generalizability of the
previously proposed framework using a database created under the U.S. Army's
One World Terrain (OWT) project with a variety of landscapes (i.e., various
building styles, types of vegetation, and urban density) and different data
qualities (i.e., flight altitudes and overlap between images). Although the
database is considerably larger than existing databases, it remains unknown
whether deep-learning algorithms have truly achieved their full potential in
terms of accuracy, as sizable data sets for training and validation are
currently lacking. Obtaining large annotated 3D point-cloud databases is
time-consuming and labor-intensive, not only from a data annotation perspective
in which the data must be manually labeled by well-trained personnel, but also
from a raw data collection and processing perspective. Furthermore, it is
generally difficult for segmentation models to differentiate objects, such as
buildings and tree masses, and these types of scenarios do not always exist in
the collected data set. Thus, the objective of this study is to investigate
using synthetic photogrammetric data as a substitute for real-world data in
training deep-learning algorithms. We have investigated methods for generating
synthetic UAV-based photogrammetric data to provide a sufficiently sized
database for training a deep-learning algorithm, with the ability to enlarge
the data set for scenarios in which deep-learning models struggle.
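To make the idea concrete, below is a minimal toy sketch of producing labeled synthetic point-cloud data. It is not the authors' photogrammetric simulation pipeline; the class ids, geometry, and parameters are all invented for illustration.

```python
# Toy sketch (not the authors' pipeline): procedurally generate a labeled
# point cloud with ground, building-like boxes, and tree-like blobs, so a
# segmentation model can be trained when real annotated scans are scarce.
import numpy as np

RNG = np.random.default_rng(0)
GROUND, BUILDING, TREE = 0, 1, 2  # illustrative class ids

def sample_ground(n, extent=50.0):
    xy = RNG.uniform(-extent, extent, size=(n, 2))
    z = RNG.normal(0.0, 0.05, size=(n, 1))          # slight surface noise
    return np.hstack([xy, z]), np.full(n, GROUND)

def sample_building(n, center, size=(8.0, 8.0, 12.0)):
    pts = RNG.uniform(-0.5, 0.5, size=(n, 3)) * size
    pts[:, 2] += size[2] / 2                        # sit on the ground
    return pts + np.array([*center, 0.0]), np.full(n, BUILDING)

def sample_tree(n, center, radius=2.5, height=6.0):
    pts = RNG.normal(0.0, radius / 2, size=(n, 3))
    pts[:, 2] = np.abs(pts[:, 2]) + height          # canopy above the ground
    return pts + np.array([*center, 0.0]), np.full(n, TREE)

def synth_scene(n_buildings=4, n_trees=10):
    parts = [sample_ground(20000)]
    for _ in range(n_buildings):
        parts.append(sample_building(4000, RNG.uniform(-40, 40, 2)))
    for _ in range(n_trees):
        parts.append(sample_tree(1500, RNG.uniform(-40, 40, 2)))
    pts = np.vstack([p for p, _ in parts])
    labels = np.concatenate([l for _, l in parts])
    return pts.astype(np.float32), labels.astype(np.int64)

points, labels = synth_scene()  # feed to a point-cloud segmentation model
```

An actual synthetic-photogrammetry workflow would presumably render UAV imagery over a virtual scene and run photogrammetric reconstruction, so the resulting clouds carry realistic reconstruction noise; the sketch above only mimics the labeled-output format.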
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU); a rough sketch of the projection idea follows this entry.
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
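As a loose illustration of the gradient-projection idea behind PGU (an assumption about the mechanics, not the paper's exact algorithm; all names below are hypothetical), an update computed on the forget data can be projected onto the orthogonal complement of the subspace spanned by retained-data gradients:

```python
# Hedged sketch of a projected-gradient update: remove from the unlearning
# gradient any component lying in the span of retained-data gradients, so the
# update (ideally) leaves behavior on the remaining dataset unchanged.
# This illustrates the general mechanism, not the paper's exact PGU algorithm.
import numpy as np

def retained_subspace(retained_grads, eps=1e-8):
    """Orthonormal basis (rows) for the span of retained-data gradients."""
    G = np.asarray(retained_grads)          # shape: (k, n_params)
    _, s, vt = np.linalg.svd(G, full_matrices=False)
    return vt[s > eps]                      # keep directions with real energy

def project_out(grad, basis):
    """Project grad onto the orthogonal complement of the retained subspace."""
    return grad - basis.T @ (basis @ grad)

# usage: one unlearning step on flattened parameters (placeholder values)
theta = np.zeros(10)
retained = np.random.default_rng(1).normal(size=(3, 10))  # retained-data grads
g_forget = np.random.default_rng(2).normal(size=10)       # grad on forget data
basis = retained_subspace(retained)
theta -= 0.1 * project_out(g_forget, basis)
```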
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43× speedup (a toy sketch of the surrogate idea follows this entry).
Our results on TUD-L, LINEMOD, and Occluded-LINEMOD show that a neural network trained on our searched dataset consistently outperforms the same network trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z)
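A hedged sketch of the surrogate idea (the generator, surrogate, and configs below are toy stand-ins, not AutoSynth's actual networks or search space): rank candidate data-synthesis configurations with a model that is far cheaper to evaluate than the real task network.

```python
# Toy sketch of surrogate-guided dataset search: score each candidate
# synthesis config with a cheap surrogate model and keep the config the
# surrogate ranks highest. Hypothetical stand-ins throughout.
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(cfg):
    """Hypothetical generator: noisy pairs whose difficulty depends on cfg."""
    x = rng.normal(size=(cfg["n"], 3))
    y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, cfg["noise"], cfg["n"])
    return x, y

def surrogate_score(x, y):
    """Cheap proxy for the expensive task network: ridge-regression fit error."""
    w = np.linalg.solve(x.T @ x + 1e-3 * np.eye(3), x.T @ y)
    return -np.mean((x @ w - y) ** 2)       # higher is better

candidates = [{"n": n, "noise": s} for n in (200, 1000) for s in (0.1, 0.5)]
best = max(candidates, key=lambda c: surrogate_score(*make_dataset(c)))
print("selected synthesis config:", best)
```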
- Ground material classification for UAV-based photogrammetric 3D data: A 2D-3D Hybrid Approach [1.3359609092684614]
In recent years, photogrammetry has been widely used in many areas to create 3D virtual data representing the physical environment.
These cutting-edge technologies have caught the attention of the US Army and Navy for rapid 3D battlefield reconstruction, virtual training, and simulation.
arXiv Detail & Related papers (2021-09-24T22:29:26Z)
- REGRAD: A Large-Scale Relational Grasp Dataset for Safe and Object-Specific Robotic Grasping in Clutter [52.117388513480435]
We present a new dataset, named REGRAD, to support the modeling of relationships among objects and grasps.
Our dataset is collected in both 2D-image and 3D point-cloud form.
Users are free to import their own object models to generate as much data as they want.
arXiv Detail & Related papers (2021-04-29T05:31:21Z)
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- Towards General Purpose Geometry-Preserving Single-View Depth Estimation [1.9573380763700712]
Single-view depth estimation (SVDE) plays a crucial role in scene understanding for AR applications, 3D modeling, and robotics.
Recent works have shown that a successful solution strongly relies on the diversity and volume of training data.
Our work shows that a model trained on this data along with conventional datasets can gain accuracy while predicting correct scene geometry.
arXiv Detail & Related papers (2020-09-25T20:06:13Z)
- Semantic Segmentation and Data Fusion of Microsoft Bing 3D Cities and Small UAV-based Photogrammetric Data [0.0]
The authors presented a fully automated data segmentation and object-information extraction framework for creating simulation terrain from UAV-based photogrammetric data.
Data quality issues in the aircraft-based photogrammetric data are identified.
The authors also proposed a data registration workflow that combines the traditional iterative closest point (ICP) algorithm with the extracted semantic information (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-08-21T18:56:05Z)
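A minimal sketch of one semantically constrained ICP step, assuming both clouds carry per-point class labels (an illustration of the general idea, not the authors' registration workflow):

```python
# Minimal ICP step restricted to semantically matching points, as a rough
# illustration of ICP guided by extracted labels (not the authors' workflow).
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst, src_lbl, dst_lbl, cls):
    """One rigid-alignment step using only points of class `cls`."""
    s = src[src_lbl == cls]
    d = dst[dst_lbl == cls]
    # nearest-neighbor correspondences within the shared class
    idx = cKDTree(d).query(s)[1]
    d = d[idx]
    # closed-form rigid transform (Kabsch / SVD)
    mu_s, mu_d = s.mean(0), d.mean(0)
    H = (s - mu_s).T @ (d - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t                             # apply as: src @ R.T + t
```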
- Fully Automated Photogrammetric Data Segmentation and Object Information Extraction Approach for Creating Simulation Terrain [0.0]
This research aims to develop a fully automated photogrammetric data segmentation and object information extraction framework.
Considering the use case of the data in creating realistic virtual environments for training and simulations, segmenting the data and extracting object information are essential tasks.
arXiv Detail & Related papers (2020-08-09T09:32:09Z)
- Deep Traffic Sign Detection and Recognition Without Target Domain Real Images [52.079665469286496]
We propose a novel database generation method that requires no real images from the target domain, only (i) arbitrary natural background images and (ii) templates of the traffic signs.
The method does not aim to outperform training with real data, but to be a viable alternative when real data is unavailable.
On large data sets, training with a fully synthetic data set almost matches the performance of training with a real one (an illustrative compositing sketch follows this entry).
arXiv Detail & Related papers (2020-07-30T21:06:47Z)
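As an illustration of what template-based generation could look like (a sketch under assumed mechanics, not the paper's pipeline), one might composite a randomly transformed sign template onto an arbitrary background image and record the bounding box as the label:

```python
# Illustrative sketch of template-based synthetic data: paste a randomly
# scaled/rotated sign template onto an arbitrary background and record the
# bounding box as the detection label. Not the paper's exact pipeline.
import random
from PIL import Image

def composite(background: Image.Image, template: Image.Image):
    bg = background.convert("RGB").copy()
    # random scale and in-plane rotation of the template
    size = random.randint(24, 96)
    sign = template.convert("RGBA").resize((size, size))
    sign = sign.rotate(random.uniform(-15, 15), expand=True)
    # random placement; assumes the background is larger than the template
    x = random.randint(0, bg.width - sign.width)
    y = random.randint(0, bg.height - sign.height)
    bg.paste(sign, (x, y), mask=sign)       # alpha mask keeps the sign shape
    box = (x, y, x + sign.width, y + sign.height)
    return bg, box                          # image + bounding-box label
```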
This list is automatically generated from the titles and abstracts of the papers on this site.