Deterministic Guided LiDAR Depth Map Completion
- URL: http://arxiv.org/abs/2106.07256v1
- Date: Mon, 14 Jun 2021 09:19:47 GMT
- Title: Deterministic Guided LiDAR Depth Map Completion
- Authors: Bryan Krauss, Gregory Schroeder, Marko Gustke, Ahmed Hussein
- Abstract summary: This paper presents a non-deep learning-based approach to densify a sparse LiDAR-based depth map using a guidance RGB image.
The work is evaluated on the KITTI depth completion benchmark, which validates the proposed approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate dense depth estimation is crucial for autonomous vehicles to analyze
their environment. This paper presents a non-deep learning-based approach to
densify a sparse LiDAR-based depth map using a guidance RGB image. To achieve
this goal, the RGB image is first cleared of most camera-LiDAR misalignment
artifacts. It is then over-segmented, and a plane is approximated for each
superpixel. If a superpixel is not well represented by a single plane, a plane
is instead approximated for the convex hull of the largest inlier set. Finally,
the pinhole camera model is used for the interpolation, and the remaining areas
are filled in. The method is evaluated on the KITTI depth completion benchmark,
which validates the proposed work and shows that it outperforms
state-of-the-art non-deep learning-based methods as well as several deep
learning-based methods.
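The per-superpixel plane fitting and pinhole-model interpolation described above can be sketched as follows. This is a minimal illustration under assumed camera intrinsics, with hypothetical function names and a plain SVD plane fit, not the authors' implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a*x + b*y + c*z = d through 3-D points (N, 3)."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # (a, b, c)
    d = normal @ centroid
    return normal, d

def densify_superpixel(sparse_depth, mask, fx, fy, cx, cy):
    """Interpolate depth for every pixel of one superpixel (boolean mask)
    from the plane fitted to its sparse LiDAR returns."""
    v, u = np.nonzero(mask & (sparse_depth > 0))
    z = sparse_depth[v, u]
    if z.size < 3:                       # too few samples to fit a plane
        return None
    # Back-project the sparse samples with the pinhole camera model.
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    (a, b, c), d = fit_plane(pts)
    # Intersect each pixel's viewing ray with the plane:
    # z * (a*(u-cx)/fx + b*(v-cy)/fy + c) = d
    vv, uu = np.nonzero(mask)
    denom = a * (uu - cx) / fx + b * (vv - cy) / fy + c
    dense = np.zeros_like(sparse_depth, dtype=float)
    valid = np.abs(denom) > 1e-6
    dense[vv[valid], uu[valid]] = d / denom[valid]
    return dense
```

In the real pipeline this step would run per superpixel after artifact removal, with a residual check deciding whether to fall back to the inlier convex hull.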
Related papers
- RGB Guided ToF Imaging System: A Survey of Deep Learning-based Methods [30.34690112905212]
Integrating an RGB camera into a ToF imaging system has become a significant technique for perceiving the real world.
This paper comprehensively reviews the works related to RGB guided ToF imaging, including network structures, learning strategies, evaluation metrics, benchmark datasets, and objective functions.
arXiv Detail & Related papers (2024-05-16T17:59:58Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- RGB-based Category-level Object Pose Estimation via Decoupled Metric Scale Recovery [72.13154206106259]
We propose a novel pipeline that decouples the 6D pose and size estimation to mitigate the influence of imperfect scales on rigid transformations.
Specifically, we leverage a pre-trained monocular estimator to extract local geometric information.
A separate branch is designed to directly recover the metric scale of the object based on category-level statistics.
arXiv Detail & Related papers (2023-09-19T02:20:26Z)
- Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) ranks among the top on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z)
- PDC: Piecewise Depth Completion utilizing Superpixels [0.0]
Current approaches often rely on CNN-based methods with several known drawbacks.
We propose our novel Piecewise Depth Completion (PDC), which works completely without deep learning.
In our evaluation, we can show both the influence of the individual proposed processing steps and the overall performance of our method on the challenging KITTI dataset.
arXiv Detail & Related papers (2021-07-14T13:58:39Z)
- A Surface Geometry Model for LiDAR Depth Completion [19.33116596688515]
LiDAR depth completion is a task that predicts depth values for every pixel on the corresponding camera frame.
Most of the existing state-of-the-art solutions are based on deep neural networks, which need a large amount of data and heavy computations for training the models.
In this letter, a novel non-learning depth completion method is proposed that exploits local surface geometry, enhanced by an outlier removal algorithm.
arXiv Detail & Related papers (2021-04-17T06:48:01Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
Assuming depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
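The learned-bases idea can be sketched with plain PCA: learn principal depth bases from training depth maps, then fit the basis weights to the observed sparse pixels by least squares. The function names and toy dimensions below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def learn_depth_bases(train_depths, k):
    """PCA on flattened training depth maps (N, H*W) -> mean and k bases."""
    mean = train_depths.mean(axis=0)
    _, _, vt = np.linalg.svd(train_depths - mean, full_matrices=False)
    return mean, vt[:k]                  # shapes (H*W,) and (k, H*W)

def complete_depth(sparse, mean, bases):
    """Fit basis weights to the observed sparse pixels, then reconstruct."""
    flat = sparse.ravel()
    obs = flat > 0                       # observed-pixel mask
    # Least squares: bases[:, obs].T @ w ~= flat[obs] - mean[obs]
    w, *_ = np.linalg.lstsq(bases[:, obs].T, flat[obs] - mean[obs], rcond=None)
    return (mean + w @ bases).reshape(sparse.shape)
```

When the true depth map lies in the span of the learned bases, a handful of sparse observations suffices to recover the full-resolution map exactly; in practice the bases only approximate real scenes, so the reconstruction is a low-dimensional estimate.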
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- Learning a Geometric Representation for Data-Efficient Depth Estimation via Gradient Field and Contrastive Loss [29.798579906253696]
We propose a gradient-based self-supervised learning algorithm with momentum contrastive loss to help ConvNets extract the geometric information with unlabeled images.
Our method outperforms previous state-of-the-art self-supervised learning algorithms and roughly triples the efficiency of labeled data.
arXiv Detail & Related papers (2020-11-06T06:47:19Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely, instead of different views, on depth from focus cues.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.