Robust and accurate depth estimation by fusing LiDAR and Stereo
- URL: http://arxiv.org/abs/2207.06139v1
- Date: Wed, 13 Jul 2022 11:55:15 GMT
- Title: Robust and accurate depth estimation by fusing LiDAR and Stereo
- Authors: Guangyao Xu, Junfeng Fan, En Li, Xiaoyu Long, and Rui Guo
- Abstract summary: We propose a precise and robust method for fusing LiDAR and stereo cameras.
This method fully combines the advantages of the LiDAR and the stereo camera.
We evaluate the proposed pipeline on the KITTI benchmark.
- Score: 8.85338187686374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth estimation is one of the key technologies in some fields such as
autonomous driving and robot navigation. However, the traditional method of
using a single sensor is inevitably limited by the performance of the sensor.
Therefore, a precise and robust method for fusing LiDAR and stereo cameras is
proposed. This method combines the advantages of both sensors, retaining the
high depth precision of the LiDAR and the high resolution of the camera images.
Compared with the traditional stereo matching method, the algorithm is less
sensitive to object texture and lighting conditions. Firstly, the depth of the
LiDAR data is converted to the disparity of the stereo camera. Because the
LiDAR data are relatively sparse along the y-axis, the converted disparity map
is up-sampled by interpolation. Secondly, in order
to make full use of the precise disparity map, the disparity map and stereo
matching are fused to propagate the accurate disparity. Finally, the disparity
map is converted to the depth map. Moreover, the converted disparity map can
also increase the speed of the algorithm. We evaluate the proposed pipeline on
the KITTI benchmark. The experiment demonstrates that our algorithm has higher
accuracy than several classic methods.
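The first two steps of the pipeline above can be sketched in a few lines. The snippet below is a minimal illustration, assuming a rectified pinhole stereo rig with focal length f (in pixels) and baseline B (in meters), so that disparity d = f * B / Z; the function names and the per-column linear interpolation are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m, eps=1e-6):
    """Convert a sparse metric depth map (0 = no measurement)
    to a disparity map using d = f * B / Z."""
    disparity = np.zeros_like(depth_m)
    valid = depth_m > eps
    disparity[valid] = focal_px * baseline_m / depth_m[valid]
    return disparity

def upsample_columns(disparity):
    """Densify each image column by linearly interpolating between
    the LiDAR returns, which are sparse along the y-axis."""
    dense = disparity.copy()
    rows = np.arange(disparity.shape[0])
    for col in range(disparity.shape[1]):
        valid = disparity[:, col] > 0
        if valid.sum() >= 2:
            dense[:, col] = np.interp(rows, rows[valid], disparity[valid, col])
    return dense
```

Converting the fused disparity back to depth at the end of the pipeline is the inverse of the same relation, Z = f * B / d.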
Related papers
- Stereo-LiDAR Depth Estimation with Deformable Propagation and Learned Disparity-Depth Conversion [16.164300644900404]
We propose a novel stereo-LiDAR depth estimation network with Semi-Dense hint Guidance, named SDG-Depth.
Our network includes a deformable propagation module for generating a semi-dense hint map and a confidence map by propagating sparse hints using a learned deformable window.
Our method is both accurate and efficient. The experimental results on benchmark tests show its superior performance.
arXiv Detail & Related papers (2024-04-11T08:12:48Z) - Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from various sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z) - Depth Estimation fusing Image and Radar Measurements with Uncertain Directions [14.206589791912458]
In prior radar-image fusion work, image features are merged with the uncertain sparse depths measured by radar through convolutional layers.
Our method avoids this problem by computing features only with an image and conditioning the features pixelwise with the radar depth.
Our method improves training by learning only from these possibly correct radar directions, whereas the previous method trains on raw radar measurements.
arXiv Detail & Related papers (2024-03-23T10:16:36Z) - SDGE: Stereo Guided Depth Estimation for 360$^\circ$ Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve a 360$^\circ$ perception.
These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
arXiv Detail & Related papers (2024-02-19T02:41:37Z) - Non-learning Stereo-aided Depth Completion under Mis-projection via Selective Stereo Matching [0.5067618621449753]
We propose a non-learning depth completion method for a sparse depth map captured using a light detection and ranging (LiDAR) sensor guided by a pair of stereo images.
The proposed method reduced the mean absolute error (MAE) of the depth estimation to 0.65 times that of the baseline and was approximately twice as accurate at long range.
arXiv Detail & Related papers (2022-10-04T07:46:56Z) - High-Resolution Depth Maps Based on TOF-Stereo Fusion [27.10059147107254]
We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm.
We show that the proposed algorithm outperforms 2D image-based stereo algorithms.
The algorithm potentially exhibits real-time performance on a single CPU.
arXiv Detail & Related papers (2021-07-30T15:11:42Z) - SMD-Nets: Stereo Mixture Density Networks [68.56947049719936]
We propose Stereo Mixture Density Networks (SMD-Nets), a simple yet effective learning framework compatible with a wide class of 2D and 3D architectures.
Specifically, we exploit bimodal mixture densities as output representation and show that this allows for sharp and precise disparity estimates near discontinuities.
We carry out comprehensive experiments on a new high-resolution and highly realistic synthetic stereo dataset, consisting of stereo pairs at 8Mpx resolution, as well as on real-world stereo datasets.
arXiv Detail & Related papers (2021-04-08T16:15:46Z) - Fusion of Range and Stereo Data for High-Resolution Scene-Modeling [20.824550995195057]
This paper addresses the problem of range-stereo fusion, for the construction of high-resolution depth maps.
We combine low-resolution depth data with high-resolution stereo data, in a maximum a posteriori (MAP) formulation.
The accuracy of the method is not compromised, owing to three properties of the data-term in the energy function.
arXiv Detail & Related papers (2020-12-12T09:37:42Z) - Direct Depth Learning Network for Stereo Matching [79.3665881702387]
A novel Direct Depth Learning Network (DDL-Net) is designed for stereo matching.
DDL-Net consists of two stages: the Coarse Depth Estimation stage and the Adaptive-Grained Depth Refinement stage.
We show that DDL-Net achieves an average improvement of 25% on the SceneFlow dataset and 12% on the DrivingStereo dataset.
arXiv Detail & Related papers (2020-12-10T10:33:57Z) - Displacement-Invariant Cost Computation for Efficient Stereo Matching [122.94051630000934]
Deep learning methods have dominated stereo matching leaderboards by yielding unprecedented disparity accuracy.
But their inference time is typically slow, on the order of seconds for a pair of 540p images.
We propose a displacement-invariant cost module to compute the matching costs without needing a 4D feature volume.
arXiv Detail & Related papers (2020-12-01T23:58:16Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.