Structure-Aware Completion of Photogrammetric Meshes in Urban Road
Environment
- URL: http://arxiv.org/abs/2011.11210v3
- Date: Wed, 10 Feb 2021 03:45:29 GMT
- Title: Structure-Aware Completion of Photogrammetric Meshes in Urban Road
Environment
- Authors: Qing Zhu and Qisen Shang and Han Hu and Haojia Yu and Ruofei Zhong
- Abstract summary: The paper proposes a structure-aware completion approach to improve the quality of meshes by removing undesired vehicles on the road seamlessly.
The proposed method is also capable of handling tiled mesh models for large-scale scenes.
- Score: 13.7725733099315
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Photogrammetric mesh models obtained from aerial oblique images have been
widely used for urban reconstruction. However, the photogrammetric meshes also
suffer from severe texture problems, especially on the road areas due to
occlusion. This paper proposes a structure-aware completion approach to improve
the quality of meshes by removing undesired vehicles on the road seamlessly.
Specifically, the discontinuous texture atlas is first integrated into a
continuous screen space through rendering by the graphics pipeline; the
rendering also records the mapping needed to deintegrate the edited image
back to the original texture atlas. Vehicle regions are masked by a standard
object detection approach, e.g., Faster R-CNN. The masked regions are then
completed under the guidance of the linear structures and regularities in the
road region, implemented based on PatchMatch. Finally, the completed rendered
image is deintegrated to the original texture atlas, and the triangles for the
vehicles are flattened to improve the meshes. Experimental evaluations and
analyses are conducted on three datasets captured with different sensors and
ground sample distances. The results reveal that the proposed method can
produce quite realistic meshes after removing the vehicles. The
structure-aware completion approach for road regions outperforms popular
image completion methods, and an ablation study further confirms the
effectiveness of the linear guidance. The proposed method is also capable of
handling tiled mesh models for large-scale scenes. Dataset and code are
available at
vrlab.org.cn/~hanhu/projects/mesh.
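The core completion step described above can be illustrated with a minimal sketch. The code below is a simplified stand-in, not the authors' PatchMatch-based implementation: it fills each masked (vehicle) pixel by sampling the nearest unmasked pixel along an assumed dominant road direction, which mimics the role of the linear guidance in the paper. The function name and toy data are hypothetical.

```python
import numpy as np

def complete_along_direction(img, mask, direction=(0, 1)):
    """Fill masked pixels by copying the nearest unmasked pixel along a
    dominant linear direction (e.g. along lane markings).

    img: 2D array of pixel values; mask: 2D bool array, True = to fill."""
    out = img.copy()
    dy, dx = direction
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            filled = False
            # search outward in both directions until an unmasked pixel is hit
            for step in range(1, max(h, w)):
                for sy, sx in ((dy, dx), (-dy, -dx)):
                    ny, nx = y + sy * step, x + sx * step
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                        out[y, x] = img[ny, nx]
                        filled = True
                        break
                if filled:
                    break
    return out

# Toy example: horizontal road "stripes" with a masked vehicle region.
img = np.tile(np.arange(4)[:, None], (1, 6))   # each row has a constant value
mask = np.zeros_like(img, dtype=bool)
mask[1:3, 2:4] = True                          # occlude a 2x2 block
result = complete_along_direction(img, mask, direction=(0, 1))
# Filling along the stripe direction restores the original pattern exactly.
```

Because the fill direction matches the linear structure of the stripes, the completion is seamless; filling across the stripes would smear rows into each other, which is the failure mode the paper's linear guidance avoids.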
Related papers
- Shape Your Ground: Refining Road Surfaces Beyond Planar Representations [35.63881467885378]
Road surface reconstruction from aerial images is fundamental for autonomous driving, urban planning, and virtual simulation.
Existing reconstruction methods often produce artifacts and inconsistencies that limit usability.
We introduce FlexRoad, the first framework to address road surface smoothing by fitting Non-Uniform Rational B-Splines (NURBS) surfaces to 3D road points obtained from photogrammetric reconstructions or geodata providers.
arXiv Detail & Related papers (2025-04-15T21:20:44Z)
- Decompositional Neural Scene Reconstruction with Generative Diffusion Prior [64.71091831762214]
Decompositional reconstruction of 3D scenes, with complete shapes and detailed texture, is intriguing for downstream applications.
Recent approaches incorporate semantic or geometric regularization to address this issue, but they suffer significant degradation in underconstrained areas.
We propose DP-Recon, which employs diffusion priors in the form of Score Distillation Sampling (SDS) to optimize the neural representation of each individual object under novel views.
arXiv Detail & Related papers (2025-03-19T02:11:31Z)
- FlexDrive: Toward Trajectory Flexibility in Driving Scene Reconstruction and Rendering [79.39246982782717]
We introduce an Inverse View Warping technique to create compact and high-quality images as supervision for the reconstruction of the out-of-path views.
Our method achieves superior in-path and out-of-path reconstruction and rendering performance on the widely used Open dataset.
arXiv Detail & Related papers (2025-02-28T14:32:04Z)
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components [77.33782775860028]
We introduce CarPatch, a novel synthetic benchmark of vehicles.
In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view.
Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques.
arXiv Detail & Related papers (2023-07-24T11:59:07Z)
- TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using Differentiable Rendering [54.35405028643051]
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces an RGBD-aided structure from motion, which can yield filtered depth maps.
We adopt a neural implicit surface reconstruction method, which allows for a high-quality mesh.
arXiv Detail & Related papers (2023-03-27T10:07:52Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization with satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z)
- Large-Scale 3D Semantic Reconstruction for Automated Driving Vehicles with Adaptive Truncated Signed Distance Function [9.414880946870916]
We propose a novel 3D reconstruction and semantic mapping system using LiDAR and camera sensors.
An Adaptive Truncated Function is introduced to describe surfaces implicitly, which can deal with different LiDAR point sparsities.
An optimal image patch selection strategy is proposed to estimate the optimal semantic class for each triangle mesh.
arXiv Detail & Related papers (2022-02-28T15:11:25Z)
- Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z)
- Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning [12.741811850885309]
This paper addresses outdoor terrain mapping using overhead images obtained from an unmanned aerial vehicle.
Dense depth estimation from aerial images during flight is challenging.
We develop a joint 2D-3D learning approach to reconstruct local meshes at each camera, which can be assembled into a global environment model.
arXiv Detail & Related papers (2021-01-06T02:09:03Z)
- Vehicle Reconstruction and Texture Estimation Using Deep Implicit Semantic Template Mapping [32.580904361799966]
We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input.
By fusing the global and local features together, our approach is capable of generating consistent and detailed texture in both visible and invisible areas.
arXiv Detail & Related papers (2020-11-30T09:27:10Z)
- SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion [86.77318031029404]
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
arXiv Detail & Related papers (2020-10-26T15:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.