Fully Automated Photogrammetric Data Segmentation and Object Information
Extraction Approach for Creating Simulation Terrain
- URL: http://arxiv.org/abs/2008.03697v1
- Date: Sun, 9 Aug 2020 09:32:09 GMT
- Title: Fully Automated Photogrammetric Data Segmentation and Object Information
Extraction Approach for Creating Simulation Terrain
- Authors: Meida Chen, Andrew Feng, Kyle McCullough, Pratusha Bhuvana Prasad,
Ryan McAlinden, Lucio Soibelman, Mike Enloe
- Abstract summary: This research aims to develop a fully automated photogrammetric data segmentation and object information extraction framework.
Considering the use case of the data in creating realistic virtual environments for training and simulations, segmenting the data and extracting object information are essential tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our previous works have demonstrated that visually realistic 3D meshes can be
automatically reconstructed with low-cost, off-the-shelf unmanned aerial
systems (UAS) equipped with capable cameras, and efficient photogrammetric
software techniques. However, such generated data do not contain semantic
information or features of objects (e.g., man-made structures, vegetation,
ground, object materials) and cannot support sophisticated user-level and
system-level interaction. Considering the use case of the data in creating
realistic virtual environments for training and simulations (e.g., mission
planning, rehearsal, threat detection), segmenting the data and
extracting object information are essential tasks. Thus, the objective of this
extracting object information are essential tasks. Thus, the objective of this
research is to design and develop a fully automated photogrammetric data
segmentation and object information extraction framework. To validate the
proposed framework, the segmented data and extracted features were used to
create virtual environments in the authors' previously designed simulation
tool, the Aerial Terrain Line of Sight Analysis System (ATLAS). The results showed
that 3D mesh trees could be replaced with geo-typical 3D tree models using the
extracted individual tree locations. The extracted tree features (i.e., color,
width, height) are valuable for selecting appropriate tree species and
enhancing visual quality. Furthermore, the identified ground material information
can be taken into consideration for pathfinding: the shortest path can be
computed considering not only the physical distance but also the off-road
vehicle's performance capabilities on different ground surface materials.
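
To make the tree-replacement step concrete, here is a minimal sketch of how extracted per-tree features (color, width, height) could drive geo-typical model selection via nearest-neighbor matching. The catalog entries, species names, and feature scaling are illustrative assumptions, not values from the paper or from ATLAS.

```python
import numpy as np

# Hypothetical catalog of geo-typical tree models: mean RGB color (0-1),
# canopy width (m), height (m). Real entries would come from the simulation
# asset library, which this abstract does not specify.
TREE_CATALOG = {
    "oak":   np.array([0.18, 0.35, 0.10, 9.0, 15.0]),
    "pine":  np.array([0.10, 0.25, 0.12, 4.0, 20.0]),
    "birch": np.array([0.30, 0.45, 0.20, 5.0, 12.0]),
}

def pick_geotypical_model(color_rgb, width_m, height_m):
    """Return the catalog species whose features best match an extracted tree."""
    query = np.array([*color_rgb, width_m, height_m], dtype=float)
    catalog = np.stack(list(TREE_CATALOG.values()))
    # Normalize each feature dimension so color and size are comparable.
    scale = catalog.max(axis=0) - catalog.min(axis=0) + 1e-9
    dists = np.linalg.norm((catalog - query) / scale, axis=1)
    return list(TREE_CATALOG)[int(np.argmin(dists))]

# An extracted tree: dark green, 8 m wide, 14 m tall -> matches "oak" here.
print(pick_geotypical_model((0.20, 0.36, 0.11), 8.0, 14.0))
```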
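Similarly, the material-aware pathfinding claim amounts to a shortest-path search whose edge costs are traversal times rather than raw distances. Below is a minimal Dijkstra sketch over a 2D material grid; the SPEED table (per-material fractions of nominal off-road speed) and the example grid are invented for illustration and are not numbers from the paper.

```python
import heapq

# Hypothetical speed factors (fraction of nominal off-road speed) per ground
# material; the paper identifies materials but publishes no such numbers.
SPEED = {"asphalt": 1.0, "gravel": 0.7, "grass": 0.5, "sand": 0.15}

def material_aware_shortest_path(grid, start, goal, cell_size=1.0):
    """Dijkstra over a 2D material grid; edge cost = traversal time, not distance."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0.0}
    frontier = [(0.0, start, [start])]
    while frontier:
        time, node, path = heapq.heappop(frontier)
        if node == goal:
            return time, path
        if time > best.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Time to cross a cell = distance / material-dependent speed.
                step = cell_size / SPEED[grid[nr][nc]]
                if time + step < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = time + step
                    heapq.heappush(frontier,
                                   (time + step, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

grid = [["asphalt", "sand", "asphalt"],
        ["asphalt", "sand", "asphalt"],
        ["asphalt", "asphalt", "asphalt"]]
# The asphalt detour (6.0 time units) beats cutting across the sand (~7.7).
print(material_aware_shortest_path(grid, (0, 0), (0, 2)))
```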
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Generalizing Single-View 3D Shape Retrieval to Occlusions and Unseen Objects [32.32128461720876]
Single-view 3D shape retrieval is a challenging task that is increasingly important with the growth of available 3D data.
We systematically evaluate single-view 3D shape retrieval along three different axes: the presence of object occlusions and truncations, generalization to unseen 3D shape data, and generalization to unseen objects in the input images.
arXiv Detail & Related papers (2023-12-31T05:39:38Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- Towards Multimodal Multitask Scene Understanding Models for Indoor Mobile Agents [49.904531485843464]
In this paper, we discuss the main challenge: insufficient, or even no, labeled data for real-world indoor environments.
We describe MMISM (Multi-modality input Multi-task output Indoor Scene understanding Model) to tackle the above challenges.
MMISM takes RGB images and sparse LiDAR points as inputs, and produces 3D object detection, depth completion, human pose estimation, and semantic segmentation as output tasks (a shared-encoder, multi-head design of this kind is sketched after this list).
We show that MMISM performs on par with or even better than single-task models.
arXiv Detail & Related papers (2022-09-27T04:49:19Z)
- Ground material classification for UAV-based photogrammetric 3D data: A 2D-3D Hybrid Approach [1.3359609092684614]
In recent years, photogrammetry has been widely used in many areas to create 3D virtual data representing the physical environment.
These cutting-edge technologies have caught the attention of the US Army and Navy for rapid 3D battlefield reconstruction, virtual training, and simulations.
arXiv Detail & Related papers (2021-09-24T22:29:26Z)
- RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training fails when transferring features learned on synthetic objects to real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Generating synthetic photogrammetric data for training deep learning based 3D point cloud segmentation models [0.0]
At I/ITSEC 2019, the authors presented a fully-automated workflow to segment 3D photogrammetric point-clouds/meshes and extract object information.
The ultimate goal is to create realistic virtual environments and provide the necessary information for simulation.
arXiv Detail & Related papers (2020-08-21T18:50:42Z)
- Detection and Segmentation of Custom Objects using High Distraction Photorealistic Synthetic Data [0.5076419064097732]
We show a straightforward and useful methodology for performing instance segmentation using synthetic data.
The goal is to achieve high performance on manually-gathered and annotated real-world data of custom objects.
This white paper provides strong evidence that photorealistic simulated data can be used in practical real-world applications.
arXiv Detail & Related papers (2020-07-28T16:33:42Z)
- Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z)
- Exploring the Capabilities and Limits of 3D Monocular Object Detection -- A Study on Simulation and Real World Data [0.0]
3D object detection based on monocular camera data is a key enabler for autonomous driving.
Recent deep learning methods show promising results to recover depth information from single images.
In this paper, we evaluate the performance of a 3D object detection pipeline which is parameterizable with different depth estimation configurations.
arXiv Detail & Related papers (2020-05-15T09:05:17Z)
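
As referenced in the MMISM entry above: that summary describes a single network consuming RGB and sparse LiDAR inputs and emitting four task outputs. Here is a rough Python sketch of such a shared-encoder, multi-head pattern; every layer size, head shape, and name is an illustrative assumption, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiTaskSceneModel(nn.Module):
    """Toy shared-feature, multi-head network in the spirit of MMISM."""

    def __init__(self, num_classes=13, num_joints=17):
        super().__init__()
        # Separate encoders for RGB images and sparse LiDAR depth maps.
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.lidar_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # Fused features feed one head per output task.
        self.seg_head = nn.Conv2d(128, num_classes, 1)  # semantic segmentation
        self.depth_head = nn.Conv2d(128, 1, 1)          # depth completion
        self.pose_head = nn.Conv2d(128, num_joints, 1)  # human pose heatmaps
        self.det_head = nn.Conv2d(128, 7, 1)            # 3D box params per cell

    def forward(self, rgb, sparse_depth):
        fused = torch.cat([self.rgb_encoder(rgb),
                           self.lidar_encoder(sparse_depth)], dim=1)
        return {"segmentation": self.seg_head(fused),
                "depth": self.depth_head(fused),
                "pose": self.pose_head(fused),
                "detection": self.det_head(fused)}

model = MultiTaskSceneModel()
out = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
print({k: tuple(v.shape) for k, v in out.items()})
```

In practice each head would be trained with its own loss; the summary does not specify how the tasks are weighted.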
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.