Depth Sensing Beyond LiDAR Range
- URL: http://arxiv.org/abs/2004.03048v1
- Date: Tue, 7 Apr 2020 00:09:51 GMT
- Title: Depth Sensing Beyond LiDAR Range
- Authors: Kai Zhang, Jiaxin Xie, Noah Snavely, Qifeng Chen
- Abstract summary: We propose a novel three-camera system that utilizes small field-of-view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
- Score: 84.19507822574568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth sensing is a critical component of autonomous driving technologies, but
today's LiDAR- or stereo camera-based solutions have limited range. We seek to
increase the maximum range of self-driving vehicles' depth perception modules
for the sake of better safety. To that end, we propose a novel three-camera
system that utilizes small field-of-view cameras. Our system, along with our
novel algorithm for computing metric depth, does not require full
pre-calibration and can output dense depth maps with practically acceptable
accuracy for scenes and objects at long distances not well covered by most
commercial LiDARs.
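As a back-of-the-envelope illustration of why narrow field-of-view (long focal length) cameras help at range, the sketch below applies the standard stereo relation Z = fB/d and its first-order error model dZ ≈ Z²·δd/(fB); the focal lengths, baseline, and matching error are hypothetical values, not the paper's configuration.

```python
# Stereo depth from disparity, Z = f * B / d, with a first-order error model.
# All numbers here are hypothetical, not the paper's setup.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Metric depth for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float, depth_m: float,
                disparity_err_px: float = 0.25) -> float:
    """Depth uncertainty from a fixed matching error: dZ ~ Z^2 * d_err / (f * B)."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

for focal_px, label in [(1000.0, "wide FoV"), (4000.0, "narrow FoV")]:
    print(f"{label}: +/-{depth_error(focal_px, 1.0, 300.0):.1f} m at 300 m")
```

With the same quarter-pixel matching error, the longer focal length of the narrow-FoV lens cuts the uncertainty at 300 m from roughly ±22 m to ±6 m, which is the intuition behind trading field of view for long-range accuracy.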
Related papers
- Better Monocular 3D Detectors with LiDAR from the Past [64.6759926054061]
Camera-based 3D detectors often suffer from inferior performance compared to their LiDAR-based counterparts due to inherent depth ambiguities in images.
In this work, we seek to improve monocular 3D detectors by leveraging unlabeled historical LiDAR data.
We show consistent and significant performance gains across multiple state-of-the-art models and datasets, with a negligible additional latency of 9.66 ms and a small storage cost.
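A minimal sketch of the underlying idea, reusing past LiDAR traversals as an unlabeled geometric prior, might aggregate historical sweeps into the current ego frame as below; the pose source, map structure, and how the detector consumes the prior are all assumptions, not the paper's pipeline.

```python
import numpy as np

def accumulate_history(past_sweeps, poses_to_current):
    """Stack past LiDAR sweeps (each an N_i x 3 array) into the current
    ego frame using 4x4 transforms, yielding a dense pseudo-map."""
    merged = []
    for pts, T in zip(past_sweeps, poses_to_current):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # N_i x 4 homogeneous
        merged.append((homo @ T.T)[:, :3])               # into current frame
    return np.vstack(merged)
```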
arXiv Detail & Related papers (2024-04-08T01:38:43Z)
- Neural Rendering based Urban Scene Reconstruction for Autonomous Driving [8.007494499012624]
We propose a multimodal 3D scene reconstruction using a framework combining neural implicit surfaces and radiance fields.
Dense 3D reconstruction has many applications in automated driving including automated annotation validation.
We demonstrate qualitative and quantitative results on challenging automotive scenes.
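Hybrids of implicit surfaces and radiance fields typically convert a signed distance value into a volume-rendering density; the VolSDF-style conversion below is an assumed, common choice rather than this paper's confirmed formulation.

```python
import torch

def sdf_to_density(sdf: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """Laplace-CDF density: near zero far outside the surface (sdf >> 0),
    saturating to 1/beta inside it (sdf < 0)."""
    return (1.0 / beta) * torch.where(
        sdf > 0,
        0.5 * torch.exp(-sdf / beta),
        1.0 - 0.5 * torch.exp(sdf / beta),
    )
```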
arXiv Detail & Related papers (2024-02-09T23:20:23Z)
- Unsupervised confidence for LiDAR depth maps and applications [43.474845978673166]
We propose an effective unsupervised framework aimed at addressing the unreliability of sparse LiDAR depth maps.
Our framework estimates the confidence of the sparse depth map, which allows for filtering out outliers.
We demonstrate how this achievement can improve a wide range of tasks.
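Once a per-point confidence is available, the filtering step itself is simple; a minimal sketch with an assumed confidence map and threshold `tau`:

```python
import numpy as np

def filter_sparse_depth(depth: np.ndarray, conf: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Drop LiDAR depth hypotheses whose estimated confidence falls below
    tau; zeros are read as 'no measurement' by downstream consumers."""
    out = depth.copy()
    out[(depth > 0) & (conf < tau)] = 0.0
    return out
```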
arXiv Detail & Related papers (2022-10-06T17:59:58Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose SurroundDepth, a method that incorporates information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves state-of-the-art performance on challenging multi-camera depth estimation datasets.
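A toy version of cross-view fusion can be written as standard multi-head attention over the concatenated tokens of all surrounding views; dimensions and shapes below are assumptions, not SurroundDepth's actual architecture.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Toy cross-view fusion: tokens of every camera attend to the tokens
    of all surrounding views (dims/shapes are assumptions)."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (views, tokens, dim) -> one long sequence so information
        # can flow across cameras, then back to per-view tensors.
        v, t, d = feats.shape
        seq = feats.reshape(1, v * t, d)
        fused, _ = self.attn(seq, seq, seq)
        return fused.reshape(v, t, d)
```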
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Rope3D: The Roadside Perception Dataset for Autonomous Driving and Monocular 3D Object Detection Task [48.555440807415664]
We present the first high-diversity challenging Roadside Perception 3D dataset- Rope3D from a novel view.
The dataset consists of 50k images and over 1.5M 3D objects in various scenes.
We propose to leverage geometry constraints to resolve the inherent ambiguities caused by varying sensors and viewpoints.
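A common geometric constraint in roadside setups is intersecting a pixel's viewing ray with the known ground plane to pin down depth; the sketch below is a generic version of that idea, and its coordinate conventions and camera model are assumptions, not necessarily Rope3D's formulation.

```python
import numpy as np

def pixel_to_ground(u, v, K, cam_height_m, R=np.eye(3)):
    """Intersect the viewing ray of pixel (u, v) with the ground plane
    y = 0, given a camera cam_height_m above it (y-up world frame)."""
    ray = R @ np.linalg.inv(K) @ np.array([u, v, 1.0])  # world-frame direction
    if ray[1] >= 0:                        # ray never reaches the ground
        return None
    t = cam_height_m / -ray[1]             # solve cam_height + t * ray_y = 0
    return np.array([0.0, cam_height_m, 0.0]) + t * ray
```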
arXiv Detail & Related papers (2022-03-25T12:13:23Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
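The gating step, deciding where scene activity justifies dense illumination and depth sampling, can be approximated by binning events into a coarse grid; the grid size and threshold below are made-up values, not the paper's parameters.

```python
import numpy as np

def active_cells(events_xy: np.ndarray, shape=(480, 640), cell=32, min_events=50):
    """Bin events into a coarse grid; cells busy enough to justify dense
    illumination/depth sampling are returned as (row, col) indices."""
    grid = np.zeros((shape[0] // cell, shape[1] // cell), dtype=int)
    for x, y in events_xy:
        grid[int(y) // cell, int(x) // cell] += 1
    return np.argwhere(grid >= min_events)
```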
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- LiDARTouch: Monocular metric depth estimation with a few-beam LiDAR [40.98198236276633]
Vision-based depth estimation is a key feature in autonomous systems.
In a monocular setup, dense depth is typically obtained with additional input from one or several expensive LiDARs.
In this paper, we propose a new alternative that densely estimates metric depth by combining a monocular camera with a lightweight LiDAR.
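One simple baseline for recovering metric scale from a few-beam LiDAR is median scaling of the monocular prediction; this is an assumed illustration rather than the paper's method, which integrates the LiDAR signal into training itself.

```python
import numpy as np

def scale_to_lidar(pred_depth: np.ndarray, lidar_depth: np.ndarray) -> np.ndarray:
    """Rescale a relative monocular depth map to metric units via the
    median ratio at pixels where the few-beam LiDAR has returns."""
    hits = lidar_depth > 0
    return pred_depth * np.median(lidar_depth[hits] / pred_depth[hits])
```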
arXiv Detail & Related papers (2021-09-08T12:06:31Z)
- A Hybrid mmWave and Camera System for Long-Range Depth Imaging [6.665586494560167]
mmWave radars offer excellent depth resolution owing to their high bandwidth at mmWave radio frequencies.
Yet, they intrinsically suffer from poor angular resolution, an order of magnitude worse than that of camera systems, and are therefore not a capable 3-D imaging solution in isolation.
We propose Metamoran, a system that combines the complementary strengths of radar and camera systems to obtain depth images at high azimuthal resolution at distances of several tens of meters with high accuracy, all from a single fixed vantage point.
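One way to combine the two sensors' strengths is to let the camera supply angular detail (an object mask) and the radar supply range; the sketch below assumes radar returns already projected into the image plane, which is not necessarily how Metamoran is implemented.

```python
import numpy as np

def object_depth(mask: np.ndarray, radar_ranges, radar_pixels) -> float:
    """Median range of the radar returns that project inside a
    camera-segmented object mask (boolean H x W array)."""
    inside = [r for r, (u, v) in zip(radar_ranges, radar_pixels)
              if mask[int(v), int(u)]]
    return float(np.median(inside)) if inside else float("nan")
```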
arXiv Detail & Related papers (2021-06-15T03:19:35Z)
- EagerMOT: 3D Multi-Object Tracking via Sensor Fusion [68.8204255655161]
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal.
We propose EagerMOT, a simple tracking formulation that integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics.
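At its core, fusing observations across modalities reduces to data association; a generic greedy matcher over a cost matrix (e.g., 1 − IoU) gives the flavor, though EagerMOT's actual two-stage association differs in detail.

```python
def greedy_match(cost, max_cost=0.7):
    """Associate tracks to detections by ascending cost (e.g., 1 - IoU),
    using each track and each detection at most once."""
    pairs, used_t, used_d = [], set(), set()
    for c, t, d in sorted((c, t, d) for t, row in enumerate(cost)
                          for d, c in enumerate(row)):
        if c <= max_cost and t not in used_t and d not in used_d:
            pairs.append((t, d))
            used_t.add(t)
            used_d.add(d)
    return pairs
```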
arXiv Detail & Related papers (2021-04-29T22:30:29Z)
- Full Surround Monodepth from Multiple Cameras [31.145598985137468]
We extend self-supervised monocular depth and ego-motion estimation to large-baseline multi-camera rigs.
We learn a single network generating dense, consistent, and scale-aware point clouds that cover the same full 360-degree surround field of view as a typical LiDAR scanner.
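Given per-camera depth maps, intrinsics, and extrinsics, producing such a LiDAR-like surround point cloud is a back-projection exercise; a minimal sketch, where array shapes and the camera-to-vehicle transform convention are assumptions:

```python
import numpy as np

def surround_point_cloud(depths, intrinsics, extrinsics):
    """Back-project each camera's dense depth map into the vehicle frame
    and concatenate into one 360-degree point cloud (N x 3)."""
    clouds = []
    for D, K, T in zip(depths, intrinsics, extrinsics):   # T: camera -> vehicle
        h, w = D.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
        pts = rays * D.reshape(1, -1)                      # camera-frame points
        homo = np.vstack([pts, np.ones((1, pts.shape[1]))])
        clouds.append((T @ homo)[:3].T)
    return np.concatenate(clouds)
```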
arXiv Detail & Related papers (2021-03-31T22:52:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.