Depth Completion with Twin Surface Extrapolation at Occlusion Boundaries
- URL: http://arxiv.org/abs/2104.02253v2
- Date: Wed, 7 Apr 2021 14:12:49 GMT
- Title: Depth Completion with Twin Surface Extrapolation at Occlusion Boundaries
- Authors: Saif Imran, Xiaoming Liu and Daniel Morris
- Abstract summary: We propose a multi-hypothesis depth representation that explicitly models both foreground and background depths.
Key to our method is the use of an asymmetric loss function that operates on a novel twin-surface representation.
We validate our method on three different datasets.
- Score: 16.773787000535645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Depth completion starts from a sparse set of known depth values and estimates
the unknown depths for the remaining image pixels. Most methods model this as
depth interpolation and erroneously interpolate depth pixels into the empty
space between spatially distinct objects, resulting in depth-smearing across
occlusion boundaries. Here we propose a multi-hypothesis depth representation
that explicitly models both foreground and background depths in the difficult
occlusion-boundary regions. Our method can be thought of as performing
twin-surface extrapolation, rather than interpolation, in these regions. Next
our method fuses these extrapolated surfaces into a single depth image
leveraging the image data. Key to our method is the use of an asymmetric loss
function that operates on a novel twin-surface representation. This enables us
to train a network to simultaneously do surface extrapolation and surface
fusion. We characterize our loss function and compare with other common losses.
Finally, we validate our method on three different datasets: KITTI, an outdoor
real-world dataset; NYU2, an indoor real-world depth dataset; and Virtual KITTI, a
photo-realistic synthetic dataset with dense ground truth, and demonstrate
improvement over the state of the art.
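To make the asymmetric-loss idea concrete, here is a minimal PyTorch sketch of a twin-surface objective. The weighting scheme and the names (d_fg, d_bg, sigma, gamma) are illustrative assumptions, not the paper's exact formulation.
```python
import torch

def asymmetric_linear_loss(pred, gt, gamma):
    # Overshooting and undershooting the ground truth are weighted
    # differently; gamma in (0, 1) sets the direction of the bias.
    err = pred - gt
    return gamma * torch.relu(err) + (1.0 - gamma) * torch.relu(-err)

def twin_surface_loss(d_fg, d_bg, sigma, gt, gamma=0.8):
    # Foreground hypothesis: overestimating depth costs more, so the
    # prediction is biased toward the near surface.
    loss_fg = asymmetric_linear_loss(d_fg, gt, gamma).mean()
    # Background hypothesis: mirrored weights bias it toward the far surface.
    loss_bg = asymmetric_linear_loss(d_bg, gt, 1.0 - gamma).mean()
    # A predicted weight sigma in [0, 1] fuses the two surfaces into a
    # single depth, trained with an ordinary symmetric L1 term.
    fused = sigma * d_fg + (1.0 - sigma) * d_bg
    return loss_fg + loss_bg + torch.abs(fused - gt).mean()
```
Because each hypothesis is pushed past the ground truth from one side only, the pair brackets the true surface at occlusion boundaries instead of smearing between foreground and background.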
Related papers
- Edge-preserving Near-light Photometric Stereo with Neural Surfaces [76.50065919656575]
We introduce an analytically differentiable neural surface for near-light photometric stereo that avoids differentiation errors at sharp depth edges.
Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method for detailed shape recovery with edge preservation.
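A minimal sketch of what an analytically differentiable surface can look like in practice, assuming a coordinate-MLP depth parameterization (the architecture is illustrative, not the paper's network): surface gradients come from autograd rather than finite differences over the pixel grid, so they remain well-defined near sharp depth edges.
```python
import torch
import torch.nn as nn

# Depth map represented as a smooth function z = f(u, v).
mlp = nn.Sequential(nn.Linear(2, 128), nn.Softplus(),
                    nn.Linear(128, 128), nn.Softplus(),
                    nn.Linear(128, 1))

uv = torch.rand(1024, 2, requires_grad=True)  # sampled pixel coordinates
z = mlp(uv)

# Analytic dz/du, dz/dv for every sample via autograd.
grads, = torch.autograd.grad(z.sum(), uv, create_graph=True)

# Surface normal of z = f(u, v) is proportional to (-dz/du, -dz/dv, 1).
normals = torch.cat([-grads, torch.ones_like(z)], dim=1)
normals = normals / normals.norm(dim=1, keepdim=True)
```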
arXiv Detail & Related papers (2022-07-11T04:51:43Z) - Edge-aware Bidirectional Diffusion for Dense Depth Estimation from Light
Fields [31.941861222005603]
We present an algorithm to estimate fast and accurate depth maps from light fields via a sparse set of depth edges and gradients.
Our proposed approach is based on the idea that true depth edges are more sensitive than texture edges to local constraints.
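As a rough illustration of propagating depth outward from sparse seeds while respecting depth edges, here is a simple edge-stopped diffusion in NumPy; the paper's bidirectional scheme is more involved, so treat this as a sketch under assumed inputs (a seed mask and an edge map in [0, 1]).
```python
import numpy as np

def diffuse_sparse_depth(depth, mask, edge, n_iters=500, lam=0.2):
    # depth: HxW with values at seed pixels; mask: HxW bool seed locations;
    # edge: HxW in [0, 1], close to 1 on depth edges.
    d = depth.copy()
    stop = 1.0 - edge                       # conductance: ~0 across edges
    for _ in range(n_iters):
        # 4-neighbour differences (np.roll wraps at borders; fine for a sketch).
        lap = (np.roll(d, -1, 0) + np.roll(d, 1, 0) +
               np.roll(d, -1, 1) + np.roll(d, 1, 1) - 4.0 * d)
        d = d + lam * stop * lap            # explicit, edge-stopped diffusion
        d[mask] = depth[mask]               # keep seed values clamped
    return d
```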
arXiv Detail & Related papers (2021-07-07T01:26:25Z) - Self-Guided Instance-Aware Network for Depth Completion and Enhancement [6.319531161477912]
Existing methods directly interpolate the missing depth measurements based on pixel-wise image content and the corresponding neighboring depth values.
We propose a novel self-guided instance-aware network (SG-IANet) that uses a self-guided mechanism to extract the instance-level features needed for depth restoration.
arXiv Detail & Related papers (2021-05-25T19:41:38Z) - Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
Assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
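The subspace idea can be demonstrated in a few lines of NumPy: learn full-resolution principal depth bases from training maps, then express any dense map as the mean plus a weighted sum of bases. Shapes and the basis count here are placeholder assumptions; in the paper the weights would be regressed by a network from the sparse input rather than fitted as below.
```python
import numpy as np

# Placeholder training set: 200 flattened 240x320 depth maps.
train = np.random.rand(200, 240 * 320)
mean = train.mean(axis=0)

# Principal depth bases via SVD of the centred training maps.
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
bases = Vt[:64]                              # top-64 orthonormal bases

def reconstruct(weights):
    # Dense depth = mean + weighted sum of full-resolution bases.
    return (mean + weights @ bases).reshape(240, 320)

# Fitting weights to a complete map (least squares = projection,
# since the basis rows are orthonormal).
target = train[0] - mean
weights = bases @ target
dense = reconstruct(weights)
```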
arXiv Detail & Related papers (2020-12-02T11:57:37Z) - Dual Pixel Exploration: Simultaneous Depth Estimation and Image
Restoration [77.1056200937214]
We study the formation of the dual-pixel (DP) pair, which links the blur and the depth information.
We propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image.
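The blur-depth link can be made explicit with a thin-lens model; the function below is the standard signed circle-of-confusion relation, not notation from the paper, and the parameter values are placeholders.
```python
def signed_blur_radius(d, d_f, f=0.05, N=2.0):
    # Thin-lens defocus blur radius for scene depth d (metres), focus
    # distance d_f, focal length f and f-number N. Positive behind the
    # focal plane, negative in front; in a dual-pixel sensor this sign
    # also flips the disparity between the left/right half-pixel views.
    A = f / N                                # aperture diameter
    return (A / 2.0) * (f / (d_f - f)) * (1.0 - d_f / d)
```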
arXiv Detail & Related papers (2020-12-01T06:53:57Z) - NeuralFusion: Online Depth Fusion in Latent Space [77.59420353185355]
We present a novel online depth map fusion approach that learns depth map aggregation in a latent feature space.
Our approach is real-time capable, handles high noise levels, and is particularly able to deal with gross outliers common for photometric stereo-based depth maps.
arXiv Detail & Related papers (2020-11-30T13:50:59Z) - View-consistent 4D Light Field Depth Estimation [37.04038603184669]
We propose a method to compute depth maps for every sub-aperture image in a light field in a view consistent way.
Our method precisely defines depth edges via EPIs, then we diffuse these edges spatially within the central view.
arXiv Detail & Related papers (2020-09-09T01:47:34Z) - Pixel-Pair Occlusion Relationship Map(P2ORM): Formulation, Inference &
Application [20.63938300312815]
We formalize concepts around geometric occlusion in 2D images (i.e., ignoring semantics).
We propose a novel unified formulation of both occlusion boundaries and occlusion orientations via a pixel-pair occlusion relation.
Experiments on a variety of datasets demonstrate that our method outperforms existing ones on this task.
We also propose a new depth map refinement method that consistently improves the performance of state-of-the-art monocular depth estimation methods.
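To fix intuition for what a pixel-pair occlusion relation encodes, the toy function below derives one from a ground-truth depth map (the paper instead infers it from a single RGB image with a CNN; the threshold is an assumption).
```python
import numpy as np

def pixel_pair_occlusion(depth, thresh=0.05):
    # Signed relation per neighbouring pair: +1 if the first pixel of
    # the pair is nearer (it occludes its neighbour), -1 if it is
    # farther, 0 if the depths agree within the threshold.
    rel_h = depth[:, 1:] - depth[:, :-1]     # horizontal (left, right) pairs
    rel_v = depth[1:, :] - depth[:-1, :]     # vertical (top, bottom) pairs
    occ_h = np.where(rel_h > thresh, 1, np.where(rel_h < -thresh, -1, 0))
    occ_v = np.where(rel_v > thresh, 1, np.where(rel_v < -thresh, -1, 0))
    return occ_h, occ_v
```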
arXiv Detail & Related papers (2020-07-23T15:52:09Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z) - Depth Edge Guided CNNs for Sparse Depth Upsampling [18.659087667114274]
Guided sparse depth upsampling aims to upsample an irregularly sampled sparse depth map when an aligned high-resolution color image is given as guidance.
We propose a guided convolutional layer to recover dense depth from a sparse and irregular depth image, with a depth edge image as guidance.
We conduct comprehensive experiments to verify our method on real-world indoor and synthetic outdoor datasets.
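A crude stand-in for edge-guided densification is a normalized convolution whose weights are suppressed near depth edges; the sketch below is not the paper's learned layer, just the underlying intuition under assumed inputs (a validity mask and an edge-confidence map in [0, 1]).
```python
import numpy as np
from scipy.ndimage import convolve

def guided_sparse_upsample(depth, mask, edge_conf, size=7):
    # Normalized convolution: average valid depth samples with weights
    # that drop near depth edges, so values do not bleed across
    # object boundaries.
    kernel = np.ones((size, size))
    w = mask.astype(float) * (1.0 - edge_conf)   # down-weight edge pixels
    num = convolve(depth * w, kernel)
    den = convolve(w, kernel)
    return np.where(den > 0, num / np.maximum(den, 1e-8), 0.0)
```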
arXiv Detail & Related papers (2020-03-23T08:56:32Z) - Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth-from-focus cues instead of different views.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)