RIDERS: Radar-Infrared Depth Estimation for Robust Sensing
- URL: http://arxiv.org/abs/2402.02067v1
- Date: Sat, 3 Feb 2024 07:14:43 GMT
- Title: RIDERS: Radar-Infrared Depth Estimation for Robust Sensing
- Authors: Han Li, Yukai Ma, Yuehao Huang, Yaqing Gu, Weihua Xu, Yong Liu,
Xingxing Zuo
- Abstract summary: Adverse weather conditions pose significant challenges to accurate dense depth estimation.
We present a novel approach for robust metric depth estimation by fusing a millimeter-wave Radar and a monocular infrared thermal camera.
Our method achieves exceptional visual quality and accurate metric estimation by addressing the challenges of ambiguity and misalignment.
- Score: 22.10378524682712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense depth recovery is crucial in autonomous driving, serving as a
foundational element for obstacle avoidance, 3D object detection, and local
path planning. Adverse weather conditions, including haze, dust, rain, snow,
and darkness, introduce significant challenges to accurate dense depth
estimation, thereby posing substantial safety risks in autonomous driving.
These challenges are particularly pronounced for traditional depth estimation
methods that rely on short electromagnetic wave sensors, such as visible
spectrum cameras and near-infrared LiDAR, due to their susceptibility to
diffraction noise and occlusion in such environments. To fundamentally overcome
this issue, we present a novel approach for robust metric depth estimation by
fusing a millimeter-wave Radar and a monocular infrared thermal camera, which
can penetrate atmospheric particles and are unaffected by lighting conditions.
Our proposed Radar-Infrared fusion method achieves highly accurate
and finely detailed dense depth estimation through three stages, including
monocular depth prediction with global scale alignment, quasi-dense Radar
augmentation by learning Radar-pixel correspondences, and local scale
refinement of dense depth using a scale map learner. Our method achieves
exceptional visual quality and accurate metric estimation by addressing the
challenges of ambiguity and misalignment that arise from directly fusing
multi-modal long-wave features. We evaluate the performance of our approach on
the NTU4DRadLM dataset and our self-collected challenging ZJU-Multispectrum
dataset. Especially noteworthy is the unprecedented robustness demonstrated by
our proposed method in smoky scenarios. Our code will be released at
\url{https://github.com/MMOCKING/RIDERS}.
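The three-stage pipeline described in the abstract lends itself to a compact illustration. The NumPy sketch below is a rough, non-authoritative rendering of that flow under simplifying assumptions: the learned components (the monocular depth network, the Radar-pixel correspondence network, and the scale map learner) are replaced by heuristic stand-ins, and all function and variable names are hypothetical rather than taken from the RIDERS code release.

```python
# Minimal NumPy sketch of a RIDERS-style three-stage flow.
# Learned components are replaced by simple heuristics; names are illustrative.
import numpy as np


def global_scale_alignment(mono_depth, radar_uv, radar_depth):
    """Stage 1: fit one global scale factor aligning the scale-ambiguous
    monocular depth map to the metric radar returns."""
    d_mono = mono_depth[radar_uv[:, 1], radar_uv[:, 0]]
    valid = (d_mono > 1e-3) & (radar_depth > 1e-3)
    # Median of log-ratios is a robust global scale under multiplicative noise.
    scale = np.exp(np.median(np.log(radar_depth[valid]) - np.log(d_mono[valid])))
    return scale * mono_depth


def quasi_dense_radar_augmentation(depth, radar_uv, radar_depth,
                                   radius=8, rel_tol=0.15):
    """Stage 2 stand-in: spread each radar depth to nearby pixels whose depth
    agrees with the radar pixel (the paper *learns* these correspondences)."""
    h, w = depth.shape
    quasi = np.zeros_like(depth)
    for (u, v), d in zip(radar_uv, radar_depth):
        u0, u1 = max(0, u - radius), min(w, u + radius + 1)
        v0, v1 = max(0, v - radius), min(h, v + radius + 1)
        patch = depth[v0:v1, u0:u1]
        similar = np.abs(patch - depth[v, u]) < rel_tol * depth[v, u]
        quasi[v0:v1, u0:u1][similar] = d
    return quasi


def local_scale_refinement(depth, quasi_dense, sigma=15.0):
    """Stage 3 stand-in: interpolate per-pixel scale residuals at quasi-dense
    radar pixels into a dense scale map (the paper uses a learned scale map
    learner) and apply it to the globally aligned depth."""
    h, w = depth.shape
    mask = quasi_dense > 0
    ys, xs = np.mgrid[0:h, 0:w]
    num = np.zeros_like(depth)
    den = np.zeros_like(depth)
    for y, x in np.argwhere(mask):
        # Naive Gaussian-weighted interpolation of the observed scale ratios.
        weight = np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2.0 * sigma ** 2))
        num += weight * (quasi_dense[y, x] / max(depth[y, x], 1e-3))
        den += weight
    scale_map = np.where(den > 1e-6, num / np.maximum(den, 1e-6), 1.0)
    return depth * scale_map


# Toy usage: a 120x160 relative depth map and 40 simulated radar returns.
rng = np.random.default_rng(0)
mono = rng.uniform(0.2, 1.0, size=(120, 160)).astype(np.float32)
uv = np.stack([rng.integers(0, 160, 40), rng.integers(0, 120, 40)], axis=1)
radar_d = 8.0 * mono[uv[:, 1], uv[:, 0]] + rng.normal(0.0, 0.2, 40)
aligned = global_scale_alignment(mono, uv, radar_d)            # stage 1
quasi = quasi_dense_radar_augmentation(aligned, uv, radar_d)   # stage 2
metric_depth = local_scale_refinement(aligned, quasi)          # stage 3
```

The median-of-log-ratios in stage 1 is only one plausible way to estimate a global scale; the point of the sketch is the ordering of the stages, not the specific estimators used for each one.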
Related papers
- GET-UP: GEomeTric-aware Depth Estimation with Radar Points UPsampling [7.90238039959534]
Existing algorithms process radar data by projecting the 3D points onto the image plane for pixel-level feature extraction (a minimal sketch of this projection step appears after this list).
We propose GET-UP, leveraging attention-enhanced Graph Neural Networks (GNN) to exchange and aggregate both 2D and 3D information from radar data.
We benchmark our proposed GET-UP on the nuScenes dataset, achieving state-of-the-art performance with a 15.3% and 14.7% improvement in MAE and RMSE over the previously best-performing model.
arXiv Detail & Related papers (2024-09-02T14:15:09Z)
- CaFNet: A Confidence-Driven Framework for Radar Camera Depth Estimation [6.9404362058736995]
This paper introduces a two-stage, end-to-end trainable Confidence-aware Fusion Net (CaFNet) for dense depth estimation.
The first stage addresses radar-specific challenges, such as ambiguous elevation and noisy measurements.
For the final depth estimation, we introduce a confidence-aware gated fusion mechanism to integrate radar and image features effectively (a generic sketch of this kind of gated fusion appears after this list).
arXiv Detail & Related papers (2024-06-30T13:39:29Z)
- Unveiling the Depths: A Multi-Modal Fusion Framework for Challenging Scenarios [103.72094710263656]
This paper presents a novel approach that identifies and integrates dominant cross-modality depth features with a learning-based framework.
We propose a novel confidence loss that steers a confidence predictor network to yield a confidence map identifying latent regions where reliable depth can be recovered.
With the resulting confidence map, we propose a multi-modal fusion network that produces the final depth in an end-to-end manner.
arXiv Detail & Related papers (2024-02-19T04:39:16Z)
- RadarCam-Depth: Radar-Camera Fusion for Depth Estimation with Learned Metric Scale [21.09258172290667]
We present a novel approach for metric dense depth estimation based on the fusion of a single-view image and a sparse, noisy Radar point cloud.
Our proposed method significantly outperforms the state-of-the-art Radar-Camera depth estimation methods by reducing the mean absolute error (MAE) of depth estimation by 25.6% and 40.2% on the challenging nuScenes dataset and our self-collected ZJU-4DRadarCam dataset, respectively.
arXiv Detail & Related papers (2024-01-09T02:40:03Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for Robust 3D Object Detection [12.557361522985898]
We propose a camera-radar matching network CramNet to fuse the sensor readings from camera and radar in a joint 3D space.
Our method supports training with sensor modality dropout, which leads to robust 3D object detection, even when a camera or radar sensor suddenly malfunctions on a vehicle.
arXiv Detail & Related papers (2022-10-17T17:18:47Z)
- RVMDE: Radar Validated Monocular Depth Estimation for Robotics [5.360594929347198]
A precise rigid calibration of binocular vision sensors is crucial for accurate depth estimation.
Alternatively, a monocular camera alleviates this limitation at the expense of depth accuracy, and the challenge is exacerbated in harsh environmental conditions.
This work explores the utility of coarse signals from radar when fused with fine-grained data from a monocular camera for depth estimation in harsh environmental conditions.
arXiv Detail & Related papers (2021-09-11T12:02:29Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the main reasons preventing the direct application of existing fusion methods.
The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
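Several of the listed methods (GET-UP, CaFNet, RVMDE, and the monocular-plus-sparse-Radar work) start from Radar points projected into the image plane; the GET-UP entry above points forward to this sketch. The NumPy snippet below is a minimal, assumption-laden illustration of that projection with a pinhole model: the intrinsics, the extrinsic transform, and the function name are made up for the example and do not come from any of the papers.

```python
# Minimal sketch: lift 3D radar returns into a sparse per-pixel depth map.
import numpy as np


def project_radar_to_image(points_radar, T_cam_radar, K, image_shape):
    """Project Nx3 radar points (radar frame, metres) into the image and return
    a sparse depth map holding each hit pixel's camera-frame depth."""
    h, w = image_shape
    # Homogeneous transform into the camera frame.
    pts_h = np.concatenate([points_radar, np.ones((len(points_radar), 1))], axis=1)
    pts_cam = (T_cam_radar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]          # keep points in front of the camera
    # Pinhole projection.
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = uv[:, 0].round().astype(int)
    v = uv[:, 1].round().astype(int)
    z = pts_cam[:, 2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    # Keep the nearest return when several points land on the same pixel.
    order = np.argsort(-z[valid])
    depth[v[valid][order], u[valid][order]] = z[valid][order]
    return depth


# Toy usage with an identity radar-to-camera transform and made-up intrinsics.
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[1.0, 0.5, 12.0], [-2.0, 0.0, 25.0], [0.3, -0.2, 7.5]])
sparse_depth = project_radar_to_image(pts, T, K, (480, 640))
```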
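A second recurring idea in the list is confidence-driven gating of the Radar branch, as in CaFNet, "Unveiling the Depths", and RadarNet's attention-based late fusion; the CaFNet entry above points forward to this sketch. The PyTorch module below is a generic, hedged illustration of such a gate rather than the architecture of any of those papers; all module and parameter names are hypothetical.

```python
# Generic sketch of confidence-gated fusion of radar and image feature maps.
import torch
import torch.nn as nn


class GatedRadarImageFusion(nn.Module):
    """Predict a per-pixel confidence for the radar branch and use it to gate
    how much radar evidence is blended into the image features."""

    def __init__(self, img_channels: int, radar_channels: int, out_channels: int):
        super().__init__()
        self.img_proj = nn.Conv2d(img_channels, out_channels, kernel_size=1)
        self.radar_proj = nn.Conv2d(radar_channels, out_channels, kernel_size=1)
        # The confidence head sees both modalities and outputs a gate in (0, 1).
        self.confidence = nn.Sequential(
            nn.Conv2d(out_channels * 2, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, radar_feat: torch.Tensor) -> torch.Tensor:
        f_img = self.img_proj(img_feat)
        f_rad = self.radar_proj(radar_feat)
        gate = self.confidence(torch.cat([f_img, f_rad], dim=1))
        # Where the radar branch is judged unreliable (gate -> 0), fall back to
        # the image features alone.
        return f_img + gate * f_rad


# Toy usage on random feature maps at 1/8 input resolution.
fusion = GatedRadarImageFusion(img_channels=64, radar_channels=16, out_channels=64)
img_feat = torch.randn(2, 64, 30, 40)
radar_feat = torch.randn(2, 16, 30, 40)
fused = fusion(img_feat, radar_feat)   # -> (2, 64, 30, 40)
```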