Unsupervised confidence for LiDAR depth maps and applications
- URL: http://arxiv.org/abs/2210.03118v1
- Date: Thu, 6 Oct 2022 17:59:58 GMT
- Title: Unsupervised confidence for LiDAR depth maps and applications
- Authors: Andrea Conti, Matteo Poggi, Filippo Aleotti and Stefano Mattoccia
- Abstract summary: We propose an effective unsupervised framework aimed at addressing the issue of sparse LiDAR depth maps.
Our framework estimates the confidence of the sparse depth map and thus allows for filtering out the outliers.
We demonstrate how this achievement can improve a wide range of tasks.
- Score: 43.474845978673166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Depth perception is pivotal in many fields, such as robotics and autonomous
driving, to name a few. Consequently, depth sensors such as LiDARs rapidly
spread in many applications. The 3D point clouds generated by these sensors
must often be coupled with an RGB camera to understand the framed scene
semantically. Usually, the former is projected over the camera image plane,
leading to a sparse depth map. Unfortunately, this process, coupled with the
intrinsic issues affecting all the depth sensors, yields noise and gross
outliers in the final output. To this end, in this paper, we propose an effective
unsupervised framework aimed at explicitly addressing this issue by learning to
estimate the confidence of the LiDAR sparse depth map and thus allowing for
filtering out the outliers. Experimental results on the KITTI dataset highlight
that our framework excels for this purpose. Moreover, we demonstrate how this
achievement can improve a wide range of tasks.
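The abstract describes projecting a LiDAR point cloud onto the camera image plane to obtain a sparse depth map, then using an estimated confidence to filter outliers. A minimal sketch of that pipeline is below; the confidence values here are assumed to come from some external estimator (the paper's learned network is not reproduced), and `project_lidar_to_depth_map` / `filter_by_confidence` are hypothetical helper names for illustration.

```python
import numpy as np

def project_lidar_to_depth_map(points, K, h, w):
    """Project 3D LiDAR points (N, 3), already in camera coordinates,
    onto the image plane, producing a sparse (h, w) depth map.
    Pixels with no projected point stay 0."""
    depth = np.zeros((h, w), dtype=np.float32)
    pts = points[points[:, 2] > 0]          # keep points in front of the camera
    # Perspective projection: u = fx*X/Z + cx, v = fy*Y/Z + cy
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], pts[inside, 2]
    # Several points can land on the same pixel; keep the closest by
    # writing far-to-near so nearer depths overwrite farther ones.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth

def filter_by_confidence(depth, confidence, threshold=0.5):
    """Zero out depth values whose estimated confidence is below the
    threshold, discarding likely outliers."""
    filtered = depth.copy()
    filtered[confidence < threshold] = 0.0
    return filtered

# Toy example: a 3x3 intrinsic matrix and three LiDAR points.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
points = np.array([[0.0, 0.0, 5.0],    # projects to pixel (32, 24)
                   [0.0, 0.0, 10.0],   # same pixel, farther -> discarded
                   [1.0, 1.0, 2.0]])   # projects outside the image
sparse = project_lidar_to_depth_map(points, K, h=48, w=64)
conf = np.ones((48, 64))
conf[24, 32] = 0.1                      # mark that measurement as unreliable
clean = filter_by_confidence(sparse, conf, threshold=0.5)
```

Note the simple threshold stands in for whatever downstream filtering policy a task needs; the paper's contribution is producing the confidence map itself, without supervision.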
Related papers
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a
Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z) - Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and AIF image ground-truth, and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z) - Is my Depth Ground-Truth Good Enough? HAMMER -- Highly Accurate
Multi-Modal Dataset for DEnse 3D Scene Regression [34.95597838973912]
HAMMER is a dataset comprising depth estimates from multiple commonly used sensors for indoor depth estimation.
We construct highly reliable ground truth depth maps with the help of 3D scanners and aligned renderings.
A popular depth estimator is trained on this data and on typical depth sensors.
arXiv Detail & Related papers (2022-05-09T21:25:09Z) - SurroundDepth: Entangling Surrounding Views for Self-Supervised
Multi-Camera Depth Estimation [101.55622133406446]
We propose SurroundDepth, a method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves state-of-the-art performance on challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z) - Joint Learning of Salient Object Detection, Depth Estimation and Contour
Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection and contour estimation. The multi-task mechanism promotes the model to learn the task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
arXiv Detail & Related papers (2022-03-09T17:20:18Z) - Gated2Gated: Self-Supervised Depth Estimation from Gated Images [22.415893281441928]
Gated cameras hold promise as an alternative to scanning LiDAR sensors with high-resolution 3D depth.
We propose an entirely self-supervised depth estimation method that uses gated intensity profiles and temporal consistency as a training signal.
arXiv Detail & Related papers (2021-12-04T19:47:38Z) - Multi-Modal Depth Estimation Using Convolutional Neural Networks [0.8701566919381223]
This paper addresses the problem of dense depth prediction from sparse distance sensor data and a single camera image under challenging weather conditions.
It explores the significance of different sensor modalities such as camera, radar, and LiDAR for estimating depth by applying deep learning approaches.
arXiv Detail & Related papers (2020-12-17T15:31:49Z) - Self-Attention Dense Depth Estimation Network for Unrectified Video
Sequences [6.821598757786515]
LiDAR and radar sensors are common hardware solutions for real-time depth estimation.
Deep learning based self-supervised depth estimation methods have shown promising results.
We propose a self-attention based depth and ego-motion network for unrectified images.
arXiv Detail & Related papers (2020-05-28T21:53:53Z) - Depth Sensing Beyond LiDAR Range [84.19507822574568]
We propose a novel three-camera system that utilizes small field of view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z) - Uncertainty depth estimation with gated images for 3D reconstruction [14.51429478464939]
Gated imaging is an emerging technology for self-driving cars.
We extend the Gated2Depth framework with aleatoric uncertainty providing an additional confidence measure for the depth estimates.
arXiv Detail & Related papers (2020-03-11T06:00:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.