A Method of Generating Measurable Panoramic Image for Indoor Mobile
Measurement System
- URL: http://arxiv.org/abs/2010.14270v1
- Date: Tue, 27 Oct 2020 13:12:02 GMT
- Title: A Method of Generating Measurable Panoramic Image for Indoor Mobile
Measurement System
- Authors: Hao Ma, Jingbin Liu, Zhirong Hu, Hongyu Qiu, Dong Xu, Zemin Wang,
Xiaodong Gong, Sheng Yang
- Abstract summary: This paper presents a pipeline for generating high-quality panoramic images with depth information.
For the fusion of 3D points and image data, we adopt a parameter self-adaptive framework to produce a dense 2D depth map.
For image stitching, an optimal seamline for the overlapping area is found using a graph-cuts-based method.
- Score: 36.47697710426005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a pipeline for generating high-quality panoramic
images with depth information, which involves two active research topics: the
fusion of LiDAR and image data, and image stitching. For the fusion of 3D
points and image data, a sparse depth map is first generated by projecting
LiDAR points onto the RGB image plane, using our reliably calibrated and
synchronized sensors; we then adopt a parameter self-adaptive framework to
densify it into a 2D dense depth map. For image stitching, an optimal seamline
for the overlapping area is found with a graph-cuts-based method to alleviate
geometric misalignment, and pyramid-based multi-band blending is applied to
remove photometric differences near the stitching line. Since each pixel is
associated with a depth value, we use this depth as the radius in the spherical
projection, which maps the panoramic image into world coordinates and thus
produces a high-quality measurable panoramic image. The proposed method is
tested on data from our data collection platform and shows promising
application prospects.
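The geometric core of the pipeline reduces to two projections: LiDAR points into the RGB image plane (to seed the sparse depth map) and panorama pixels back into 3D, with depth as the spherical radius, which is what makes the panorama measurable. The following is a minimal NumPy sketch of both steps under assumed conventions; the function names, the equirectangular angle mapping, and the LiDAR-to-camera extrinsics R, t are illustrative and not taken from the paper.

```python
import numpy as np

def lidar_to_sparse_depth(points_lidar, K, R, t, h, w):
    """Project LiDAR points onto the RGB image plane to seed a sparse depth
    map. K is the 3x3 camera intrinsic matrix; R, t are LiDAR-to-camera
    extrinsics from calibration. Names and conventions are assumptions."""
    pts_cam = points_lidar @ R.T + t              # (N, 3) in the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                   # pinhole projection
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # inside the image bounds
    depth = np.full((h, w), np.inf, dtype=np.float32)
    # when several points land on one pixel, keep the nearest depth
    np.minimum.at(depth, (v[ok], u[ok]), pts_cam[ok, 2].astype(np.float32))
    depth[np.isinf(depth)] = 0.0                  # 0 marks pixels with no return
    return depth

def panorama_to_world(depth_pano, R_pano=np.eye(3), t_pano=np.zeros(3)):
    """Back-project an equirectangular panorama into world coordinates, using
    the per-pixel depth as the radius of the spherical projection. The angle
    mapping (azimuth across the width, polar angle down the height) is an
    assumed convention."""
    h, w = depth_pano.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    theta = (u + 0.5) / w * 2.0 * np.pi - np.pi   # azimuth in [-pi, pi)
    phi = (v + 0.5) / h * np.pi                   # polar angle in [0, pi]
    r = depth_pano
    xyz = np.stack([r * np.sin(phi) * np.cos(theta),
                    r * np.sin(phi) * np.sin(theta),
                    r * np.cos(phi)], axis=-1)    # (H, W, 3), panorama frame
    return xyz @ R_pano.T + t_pano                # rigid transform to the world frame
```

With these two pieces, `panorama_to_world(depth)[v, u]` returns the world-frame coordinates of panorama pixel (u, v), so distances between scene points can be read directly off the stitched image.

The photometric half of the stitching stage, pyramid multi-band blending, can likewise be sketched with standard OpenCV pyramid calls. This is a generic Laplacian-pyramid blend along a seam mask (assumed to come from the graph-cuts seamline search); the paper's actual band count and seam handling are not specified here.

```python
import cv2
import numpy as np

def multiband_blend(img_a, img_b, seam_mask, levels=4):
    """Generic Laplacian-pyramid (multi-band) blend of two aligned 3-channel
    images. seam_mask is 1.0 where img_a should win and 0.0 where img_b
    should; the band count `levels` is an assumed parameter."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    m = cv2.merge([seam_mask.astype(np.float32)] * 3)

    # Gaussian pyramids of both images and of the seam mask
    ga, gb, gm = [a], [b], [m]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    def laplacian(g):
        # band-pass images, finest first, plus the coarsest residual
        bands = []
        for i in range(levels):
            size = (g[i].shape[1], g[i].shape[0])
            bands.append(g[i] - cv2.pyrUp(g[i + 1], dstsize=size))
        bands.append(g[levels])
        return bands

    la, lb = laplacian(ga), laplacian(gb)

    # blend each band with the mask at the matching resolution, then collapse
    blended = [mk * ba + (1.0 - mk) * bb for ba, bb, mk in zip(la, lb, gm)]
    out = blended[-1]
    for i in range(levels - 1, -1, -1):
        size = (blended[i].shape[1], blended[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + blended[i]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Blending each frequency band with a progressively smoother mask is what hides exposure differences near the seam without ghosting fine detail.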
Related papers
- Refinement of Monocular Depth Maps via Multi-View Differentiable Rendering [4.717325308876748]
We present a novel approach to generating view-consistent and detailed depth maps from a number of posed images.
We leverage advances in monocular depth estimation, which generate topologically complete but metrically inaccurate depth maps.
Our method generates dense, detailed, high-quality depth maps, even in challenging indoor scenarios, and outperforms state-of-the-art depth reconstruction approaches.
arXiv Detail & Related papers (2024-10-04T18:50:28Z)
- TerrainMesh: Metric-Semantic Terrain Reconstruction from Aerial Images Using Joint 2D-3D Learning [20.81202315793742]
This paper develops a joint 2D-3D learning approach to reconstruct a local metric-semantic mesh at each camera pose maintained by a visual odometry algorithm.
The mesh can be assembled into a global environment model to capture the terrain topology and semantics during online operation.
arXiv Detail & Related papers (2022-04-23T05:18:39Z)
- Depth-SIMS: Semi-Parametric Image and Depth Synthesis [23.700034054124604]
We present a method that generates RGB canvases with well-aligned segmentation maps and sparse depth maps, coupled with an in-painting network that transforms the RGB canvases into high-quality RGB images.
We benchmark our method in terms of structural alignment and image quality, showing an increase in mIoU over SOTA by 3.7 percentage points and a highly competitive FID.
We analyse the quality of the generated data as training data for semantic segmentation and depth completion, and show that our approach is more suited for this purpose than other methods.
arXiv Detail & Related papers (2022-03-07T13:58:32Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and improves efficiency (a generic sketch of a separable 4D convolution follows this list).
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning [12.741811850885309]
This paper addresses outdoor terrain mapping using overhead images obtained from an unmanned aerial vehicle.
Dense depth estimation from aerial images during flight is challenging.
We develop a joint 2D-3D learning approach to reconstruct local meshes at each camera, which can be assembled into a global environment model.
arXiv Detail & Related papers (2021-01-06T02:09:03Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
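As an aside on the photometric-stereo entry above, a separable 4D convolution is commonly realized by factoring a convolution over a 4D domain into two 2D convolutions, one over the spatial axes and one over the per-pixel observation axes. Below is a minimal, hypothetical PyTorch sketch of that factorization; it illustrates the general idea only and is not the architecture of the cited paper.

```python
import torch
import torch.nn as nn

class SeparableConv4d(nn.Module):
    """Factor a 4D convolution over (H, W, U, V) into a 2D convolution over
    the spatial axes (H, W) followed by one over the per-pixel observation
    axes (U, V). Generic sketch; names and shapes are assumptions."""

    def __init__(self, in_ch, mid_ch, out_ch, k=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, mid_ch, k, padding=k // 2)
        self.photometric = nn.Conv2d(mid_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        # x: (B, C, H, W, U, V), e.g. features over image pixels (H, W)
        # and per-pixel observation maps (U, V)
        b, c, h, w, u, v = x.shape
        # convolve over (H, W): fold (U, V) into the batch dimension
        x = x.permute(0, 4, 5, 1, 2, 3).reshape(b * u * v, c, h, w)
        x = torch.relu(self.spatial(x))
        m = x.shape[1]
        # convolve over (U, V): fold (H, W) into the batch dimension
        x = x.reshape(b, u, v, m, h, w).permute(0, 4, 5, 3, 1, 2)
        x = x.reshape(b * h * w, m, u, v)
        x = self.photometric(x)
        o = x.shape[1]
        return x.reshape(b, h, w, o, u, v).permute(0, 3, 1, 2, 4, 5)

# usage: a batch of 8x8 images, each pixel carrying a 16x16 observation map
# y = SeparableConv4d(4, 8, 8)(torch.randn(2, 4, 8, 8, 16, 16))
```

The factorization replaces a single k x k x k x k kernel with two k x k kernels, which is where the reduction in size and compute comes from.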