Depth Estimation from Monocular Images and Sparse Radar Data
- URL: http://arxiv.org/abs/2010.00058v1
- Date: Wed, 30 Sep 2020 19:01:33 GMT
- Title: Depth Estimation from Monocular Images and Sparse Radar Data
- Authors: Juan-Ting Lin, Dengxin Dai, and Luc Van Gool
- Abstract summary: In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the key reasons that prevents existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
- Score: 93.70524512061318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore the possibility of achieving a more accurate depth
estimation by fusing monocular images and Radar points using a deep neural
network. We give a comprehensive study of the fusion between RGB images and
Radar measurements from different aspects and propose a working solution based
on our observations. We find that the noise in Radar measurements is one of
the key reasons that prevents the fusion methods developed for LiDAR data and
images from being applied directly to the new fusion problem between Radar
data and images. The experiments are conducted on the nuScenes dataset, one of
the first datasets to feature Camera, Radar, and LiDAR recordings in diverse
scenes and weather conditions. Extensive experiments
demonstrate that our method outperforms existing fusion methods. We also
provide detailed ablation studies to show the effectiveness of each component
in our method.
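To make the fusion input concrete, below is a minimal sketch (assumed NumPy code, not the authors' implementation) of the standard input construction for Radar-Camera depth fusion: sparse Radar points are projected into the image with the camera intrinsics and stacked onto the RGB input as a mostly-empty depth channel. The function name and the last-write rule for pixel collisions are illustrative choices.

```python
import numpy as np

def radar_to_depth_map(points_xyz, K, h, w):
    """Project 3D Radar points (N, 3), given in camera coordinates, into a
    sparse (h, w) depth map using the camera intrinsics K (3, 3)."""
    depth = np.zeros((h, w), dtype=np.float32)
    z = points_xyz[:, 2]
    valid = z > 0                          # keep points in front of the camera
    uvw = (K @ points_xyz[valid].T).T      # homogeneous pixel coordinates
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]  # collisions: last write wins
    return depth

# rgb: (h, w, 3); the fused network input is then a 4-channel tensor:
# fused = np.concatenate([rgb, radar_to_depth_map(pts, K, h, w)[..., None]], -1)
```

The Radar noise the paper identifies shows up as wrong depth values written into this channel, which is one reason LiDAR-oriented fusion architectures do not transfer directly.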
Related papers
- A Generative Adversarial Network-based Method for LiDAR-Assisted Radar Image Enhancement [0.8528401618469594]
This paper presents a generative adversarial network (GAN) based approach for radar image enhancement.
The proposed method utilizes high-resolution, two-dimensional (2D) projected light detection and ranging (LiDAR) point clouds as ground truth images.
The effectiveness of the proposed method is demonstrated through both qualitative and quantitative results.
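As a rough illustration of the setup described in the summary, the sketch below (hypothetical PyTorch with placeholder networks, not the paper's code) trains a generator to map a Radar image toward a 2D-projected LiDAR target while a discriminator judges realism; the added L1 term is a common stabilizer and is an assumption here.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))            # radar -> enhanced
D = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(32, 1))                           # real/fake logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(radar, lidar):
    """radar, lidar: (B, 1, H, W); lidar is the projected ground-truth image."""
    fake = G(radar)
    # discriminator step: LiDAR projections are "real", generator output "fake"
    real_logit, fake_logit = D(lidar), D(fake.detach())
    d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
             bce(fake_logit, torch.zeros_like(fake_logit))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: fool D and stay close to the LiDAR target
    adv_logit = D(fake)
    g_loss = bce(adv_logit, torch.ones_like(adv_logit)) \
             + nn.functional.l1_loss(fake, lidar)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```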
arXiv Detail & Related papers (2024-08-30T18:22:39Z)
- Depth Estimation fusing Image and Radar Measurements with Uncertain Directions [14.206589791912458]
In prior radar-image fusion work, image features are merged through convolutional layers with the uncertain sparse depths measured by radar, so the uncertainty corrupts the image features.
Our method avoids this problem by computing features only from the image and conditioning the features pixelwise with the radar depth.
Our method also improves the training data by learning only from radar directions that are possibly correct, whereas the previous method trains on raw radar measurements.
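One plausible reading of "conditioning the features pixelwise with the radar depth" is a FiLM-style per-pixel scale and shift, sketched below in PyTorch; the module layout is an assumption, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PixelwiseConditioning(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        # radar depth (1 channel) -> per-pixel scale and shift
        self.film = nn.Conv2d(1, 2 * channels, 3, padding=1)

    def forward(self, rgb, radar_depth):
        feat = self.image_encoder(rgb)          # features from the image only
        gamma, beta = self.film(radar_depth).chunk(2, dim=1)
        return feat * (1 + gamma) + beta        # pixelwise modulation

# out = PixelwiseConditioning()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```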
arXiv Detail & Related papers (2024-03-23T10:16:36Z)
- RadarCam-Depth: Radar-Camera Fusion for Depth Estimation with Learned Metric Scale [21.09258172290667]
We present a novel approach for metric dense depth estimation based on the fusion of a single-view image and a sparse, noisy Radar point cloud.
Our proposed method significantly outperforms the state-of-the-art Radar-Camera depth estimation methods by reducing the mean absolute error (MAE) of depth estimation by 25.6% and 40.2% on the challenging nuScenes dataset and our self-collected ZJU-4DRadarCam dataset, respectively.
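For intuition, a simple non-learned stand-in for metric scaling (the paper learns the scale; this closed-form least-squares fit is only an illustration) aligns the relative monocular depth to the sparse Radar returns with one global factor. It assumes the mask of Radar returns is non-empty.

```python
import numpy as np

def fit_global_scale(mono_depth, radar_depth):
    """mono_depth, radar_depth: (h, w); radar_depth is 0 where there is no
    return. Returns mono_depth rescaled to metric units."""
    mask = radar_depth > 0
    m, r = mono_depth[mask], radar_depth[mask]
    s = (m @ r) / (m @ m)        # argmin_s sum((s * m - r) ** 2)
    return s * mono_depth        # metric-scaled dense depth
```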
arXiv Detail & Related papers (2024-01-09T02:40:03Z)
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
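For readers unfamiliar with PDMs, the sketch below shows the standard DDPM noise-prediction training objective in PyTorch. It is generic: the paper's schedules and architectures may differ, and `model(x_t, t)` is a placeholder denoiser.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative signal fraction

def diffusion_loss(model, x0):
    """x0: (B, C, H, W) clean radar images; model predicts the added noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # forward noising step
    return torch.nn.functional.mse_loss(model(x_t, t), eps)
```

The slow sampling the summary mentions comes from reversing this process one step at a time, typically hundreds of model evaluations per image.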
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
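A hedged sketch of the query mechanism the summary describes: learned BEV queries cross-attend to flattened Radar spectrum features. Shapes and modules below are illustrative assumptions, not EchoFusion's actual design.

```python
import torch
import torch.nn as nn

num_queries, d = 200, 128
bev_queries = nn.Parameter(torch.randn(num_queries, d))   # learned BEV queries
cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=8, batch_first=True)

def fuse_spectrum(spectrum_feats):
    """spectrum_feats: (B, L, d) flattened radar spectrum features."""
    q = bev_queries.unsqueeze(0).expand(spectrum_feats.shape[0], -1, -1)
    fused, _ = cross_attn(q, spectrum_feats, spectrum_feats)
    return fused  # (B, num_queries, d), ready to fuse with other sensors
```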
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Robust Human Detection under Visual Degradation via Thermal and mmWave Radar Fusion [4.178845249771262]
We present a multimodal human detection system that combines portable thermal cameras and single-chip mmWave radars.
We propose a Bayesian feature extractor and a novel uncertainty-guided fusion method that surpasses a variety of competing methods.
We evaluate the proposed method on real-world collected data and demonstrate that our approach outperforms the state-of-the-art methods by a large margin.
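One common instantiation of uncertainty-guided fusion, shown for illustration only (the paper's Bayesian formulation may differ), weights each modality's prediction by its inverse predicted variance:

```python
import torch

def uncertainty_fusion(pred_thermal, var_thermal, pred_radar, var_radar):
    """Inverse-variance weighting: the less certain a modality is at a given
    location, the less it contributes to the fused prediction."""
    w_t = 1.0 / var_thermal.clamp_min(1e-6)
    w_r = 1.0 / var_radar.clamp_min(1e-6)
    return (w_t * pred_thermal + w_r * pred_radar) / (w_t + w_r)
```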
arXiv Detail & Related papers (2023-07-07T14:23:20Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
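The bi-directional pattern can be pictured as two fusion steps, sketched below with illustrative 1x1 convolutions (not the paper's actual blocks): radar BEV features are first enriched with LiDAR detail, then the enriched radar features are fused back into the LiDAR branch.

```python
import torch
import torch.nn as nn

enrich = nn.Conv2d(128 + 64, 64, 1)   # LiDAR -> radar direction
fuse   = nn.Conv2d(128 + 64, 128, 1)  # radar -> LiDAR direction

def bi_directional(lidar_bev, radar_bev):
    """lidar_bev: (B, 128, H, W); radar_bev: (B, 64, H, W), same BEV grid."""
    radar_enriched = enrich(torch.cat([lidar_bev, radar_bev], dim=1))
    return fuse(torch.cat([lidar_bev, radar_enriched], dim=1))
```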
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
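The training idea amounts to cross-modal distillation. Below is a generic sketch, not the paper's exact losses or weights: a LiDAR-only student mimics the features and response logits of a frozen LiDAR-image teacher on top of the usual detection task loss.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_feat, teacher_feat, student_logits, teacher_logits,
                 task_loss, w_feat=1.0, w_resp=1.0):
    """Teacher tensors come from a frozen multi-modality detector; the student
    sees LiDAR only, so inference needs no image input."""
    feat_l = F.mse_loss(student_feat, teacher_feat.detach())      # feature mimicry
    resp_l = F.kl_div(F.log_softmax(student_logits, dim=-1),
                      F.softmax(teacher_logits.detach(), dim=-1),
                      reduction="batchmean")                      # response mimicry
    return task_loss + w_feat * feat_l + w_resp * resp_l
```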
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Depth Estimation from Monocular Images and Sparse Radar using Deep Ordinal Regression Network [2.0446891814677692]
We integrate sparse radar data into a monocular depth estimation model and introduce a novel preprocessing method to mitigate the sparseness and the limited field of view of radar.
We propose a novel method for estimating dense depth maps from monocular 2D images and sparse radar measurements using deep learning based on the deep ordinal regression network by Fu et al.
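Two ingredients can be sketched generically (both are stand-ins, not the paper's exact preprocessing): densifying the sparse Radar depth map, here via a max-pool dilation, and the spacing-increasing discretization that DORN-style ordinal regression uses to turn depth regression into ordered classification.

```python
import torch
import torch.nn.functional as F

def dilate_radar(depth, k=9):
    """depth: (B, 1, H, W) sparse Radar depth map, 0 where there is no return.
    Spreads each return over a k x k neighborhood to reduce sparseness."""
    return F.max_pool2d(depth, kernel_size=k, stride=1, padding=k // 2)

def sid_bins(alpha=1.0, beta=80.0, K=80):
    """Spacing-increasing discretization (SID) thresholds for DORN-style
    ordinal regression: K+1 bin edges growing geometrically on [alpha, beta]."""
    i = torch.arange(K + 1, dtype=torch.float32)
    return alpha * (beta / alpha) ** (i / K)
```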
arXiv Detail & Related papers (2021-07-15T20:17:48Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
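The attention-based late fusion can be pictured as a learned per-detection blend of the two branches; the sketch below is an illustrative stand-in, not RadarNet's actual head.

```python
import torch
import torch.nn as nn

class AttentionLateFusion(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                   nn.Linear(d, 1))

    def forward(self, lidar_feat, radar_feat):
        """lidar_feat, radar_feat: (N, d) per-detection features."""
        a = torch.sigmoid(self.score(torch.cat([lidar_feat, radar_feat], dim=-1)))
        return a * lidar_feat + (1 - a) * radar_feat  # learned per-object blend
```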
arXiv Detail & Related papers (2020-07-28T17:15:02Z)