Polarimetric Monocular Dense Mapping Using Relative Deep Depth Prior
- URL: http://arxiv.org/abs/2102.05212v1
- Date: Wed, 10 Feb 2021 01:34:37 GMT
- Title: Polarimetric Monocular Dense Mapping Using Relative Deep Depth Prior
- Authors: Moein Shakeri, Shing Yan Loo, Hong Zhang
- Abstract summary: We propose an online reconstruction method that uses full polarimetric cues available from the polarization camera.
Our method is able to significantly improve the accuracy of the depth map as well as increase its density, especially in regions of poor texture.
- Score: 8.552832023331248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is concerned with polarimetric dense map reconstruction based on a
polarization camera with the help of relative depth information as a prior. In
general, polarization imaging is able to reveal information about surface
normal such as azimuth and zenith angles, which can support the development of
solutions to the problem of dense reconstruction, especially in texture-poor
regions. However, polarimetric shape cues are ambiguous due to two types of
polarized reflection (specular/diffuse). Although methods have been proposed to
address this issue, they either are offline and therefore not practical in
robotics applications, or use incomplete polarimetric cues, leading to
sub-optimal performance. In this paper, we propose an online reconstruction
method that uses full polarimetric cues available from the polarization camera.
With our online method, we can propagate sparse depth values both along and
perpendicular to iso-depth contours. Through comprehensive experiments on
challenging image sequences, we demonstrate that our method is able to
significantly improve the accuracy of the depth map as well as increase its
density, especially in regions of poor texture.
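The "full polarimetric cues" the abstract refers to, the angle of polarization (AoP) and degree of linear polarization (DoP), can be recovered from the four intensity channels of a division-of-focal-plane polarization camera via the linear Stokes parameters, and each AoP measurement implies a small set of surface-normal azimuth hypotheses. A minimal sketch of these standard relations (function names are illustrative, not from the paper):

```python
import math

def stokes_from_intensities(i0, i45, i90, i135):
    # Linear Stokes parameters from intensities behind 0/45/90/135-degree polarizers.
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity (averaged estimate)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def aop_dop(i0, i45, i90, i135):
    # Angle of polarization (AoP) and degree of linear polarization (DoP).
    s0, s1, s2 = stokes_from_intensities(i0, i45, i90, i135)
    aop = 0.5 * math.atan2(s2, s1)           # radians, in (-pi/2, pi/2]
    dop = math.sqrt(s1 * s1 + s2 * s2) / s0  # in [0, 1]
    return aop, dop

def candidate_azimuths(aop):
    # Surface-normal azimuth hypotheses implied by one AoP measurement:
    # diffuse reflection gives aop or aop + pi; specular gives aop +/- pi/2.
    two_pi = 2.0 * math.pi
    diffuse = [aop % two_pi, (aop + math.pi) % two_pi]
    specular = [(aop + math.pi / 2) % two_pi, (aop - math.pi / 2) % two_pi]
    return diffuse, specular
```

For fully linearly polarized light at 0 degrees (i0=1, i45=0.5, i90=0, i135=0.5), this yields AoP = 0 and DoP = 1, with four azimuth hypotheses: the specular/diffuse ambiguity the abstract mentions.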
Related papers
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Parallax-Tolerant Unsupervised Deep Image Stitching [57.76737888499145]
We propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique.
First, we propose a robust and flexible warp to model the image registration from global homography to local thin-plate spline motion.
To further eliminate the parallax artifacts, we propose to composite the stitched image seamlessly by unsupervised learning for seam-driven composition masks.
arXiv Detail & Related papers (2023-02-16T10:40:55Z)
- Polarimetric Multi-View Inverse Rendering [13.391866136230165]
A polarization camera has great potential for 3D reconstruction since the angle of polarization (AoP) and the degree of polarization (DoP) of reflected light are related to an object's surface normal.
We propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color-polarization images.
arXiv Detail & Related papers (2022-12-24T12:12:12Z)
- Polarimetric Inverse Rendering for Transparent Shapes Reconstruction [1.807492010338763]
We propose a novel method for the detailed reconstruction of transparent objects by exploiting polarimetric cues.
We implicitly represent the object's geometry as a neural network, while a polarization renderer is capable of rendering the object's polarization images.
We build a polarization dataset for multi-view transparent shapes reconstruction to verify our method.
arXiv Detail & Related papers (2022-08-25T02:52:31Z)
- Monocular Depth Parameterizing Networks [15.791732557395552]
We propose a network structure that provides a parameterization of a set of depth maps with feasible shapes.
This allows us to search the shapes for a photo consistent solution with respect to other images.
Our experimental evaluation shows that our method generates more accurate depth maps and generalizes better than competing state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-21T13:02:41Z)
- Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z)
- Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normal, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)
- P2D: a self-supervised method for depth estimation from polarimetry [0.7046417074932255]
We propose exploiting polarization cues to encourage accurate reconstruction of scenes.
Our method is evaluated both qualitatively and quantitatively demonstrating that the contribution of this new information as well as an enhanced loss function improves depth estimation results.
arXiv Detail & Related papers (2020-07-15T09:32:53Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
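The coarse view alignment described in the last entry relies on unprojecting each per-view depth map into 3D. A generic pinhole-camera sketch of that unprojection (this is a standard construction, not code from the paper; names are illustrative):

```python
import numpy as np

def backproject(depth, K):
    """Unproject a dense depth map to camera-frame 3D points.

    depth: (H, W) array of metric depths
    K: (3, 3) camera intrinsic matrix
    returns: (H, W, 3) array of 3D points"""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))           # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T                          # K^{-1} [u, v, 1]^T per pixel
    return rays * depth[..., None]                           # scale each ray by its depth
```

Given such point clouds per view, views can be coarsely aligned by estimating relative rigid transforms between them.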
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.