R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes
- URL: http://arxiv.org/abs/2108.04814v1
- Date: Tue, 10 Aug 2021 17:57:03 GMT
- Title: R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes
- Authors: Stefano Gasperini, Patrick Koch, Vinzenz Dallabetta, Nassir Navab,
Benjamin Busam, Federico Tombari
- Abstract summary: Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
- Score: 69.6715406227469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While self-supervised monocular depth estimation in driving scenarios has
achieved comparable performance to supervised approaches, violations of the
static world assumption can still lead to erroneous depth predictions of
traffic participants, posing a potential safety issue. In this paper, we
present R4Dyn, a novel set of techniques to use cost-efficient radar data on
top of a self-supervised depth estimation framework. In particular, we show how
radar can be used during training as a weak supervision signal, as well as an
extra input to enhance the estimation robustness at inference time. Since
automotive radars are readily available, this allows training data to be
collected from a variety of existing vehicles. Moreover, by filtering and
expanding the signal to make it compatible with learning-based approaches, we
address radar-inherent issues, such as noise and sparsity. With R4Dyn we are
able to overcome a major limitation of self-supervised depth estimation, i.e.
the erroneous depth prediction of traffic participants. We substantially
improve the estimation on dynamic objects, such as cars, by 37% on the
challenging nuScenes dataset, hence
demonstrating that radar is a valuable additional sensor for monocular depth
estimation in autonomous vehicles. Additionally, we plan on making the code
publicly available.
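As a rough illustration of the two radar roles described in the abstract (a weak supervision signal during training and an extra input at inference), the following PyTorch sketch densifies a sparse radar depth map and applies an L1 penalty only at pixels with radar returns. The expansion kernel, loss form, and weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (NOT the authors' code) of using sparse radar depth as a weak
# supervision signal on top of a self-supervised depth network. Tensor shapes,
# the expansion kernel, and the loss weighting are illustrative assumptions.
import torch
import torch.nn.functional as F


def expand_radar_depth(radar_depth: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Densify a sparse radar depth map (B, 1, H, W), where 0 = no return,
    by averaging the returns that fall inside each local window. This stands
    in for the paper's filtering/expansion step."""
    valid = (radar_depth > 0).float()
    window_area = kernel_size * kernel_size
    depth_sum = F.avg_pool2d(radar_depth, kernel_size, stride=1,
                             padding=kernel_size // 2) * window_area
    hit_count = F.avg_pool2d(valid, kernel_size, stride=1,
                             padding=kernel_size // 2) * window_area
    expanded = depth_sum / hit_count.clamp(min=1.0)
    return torch.where(hit_count > 0, expanded, torch.zeros_like(radar_depth))


def radar_weak_supervision_loss(pred_depth: torch.Tensor,
                                radar_depth: torch.Tensor) -> torch.Tensor:
    """L1 penalty evaluated only at pixels with a (filtered) radar return;
    it would be added to the usual self-supervised photometric losses."""
    mask = (radar_depth > 0).float()
    return (mask * (pred_depth - radar_depth).abs()).sum() / mask.sum().clamp(min=1.0)


# Hypothetical usage: the expanded radar map can also be concatenated to the
# RGB image as an extra input channel at inference time.
# rgb: (B, 3, H, W), radar: (B, 1, H, W), pred: (B, 1, H, W)
# radar_dense = expand_radar_depth(radar)
# net_input = torch.cat([rgb, radar_dense], dim=1)
# loss = photometric_loss + 0.1 * radar_weak_supervision_loss(pred, radar_dense)
```

The key design point is that the loss is masked to radar hits only, so the sparse and noisy signal acts as weak supervision without overriding the dense photometric objective.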
Related papers
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Leveraging Self-Supervised Instance Contrastive Learning for Radar Object Detection [7.728838099011661]
This paper presents RiCL, an instance contrastive learning framework to pre-train radar object detectors.
We aim to pre-train an object detector's backbone, head and neck to learn with fewer data.
arXiv Detail & Related papers (2024-02-13T12:53:33Z)
- Bootstrapping Autonomous Driving Radars with Self-Supervised Learning [13.13679517730015]
Training radar models is hindered by the cost and difficulty of annotating large-scale radar data.
We propose a self-supervised learning framework to leverage the large amount of unlabeled radar data to pre-train radar-only embeddings for self-driving perception tasks.
When used for downstream object detection, we demonstrate that the proposed self-supervision framework can improve the accuracy of state-of-the-art supervised baselines by 5.8% in mAP.
arXiv Detail & Related papers (2023-12-07T18:38:39Z)
- Self-Supervised Scene Flow Estimation with 4D Automotive Radar [7.3287286038702035]
It remains largely unknown how to estimate the scene flow from a 4D radar.
Compared with the LiDAR point clouds, radar data are drastically sparser, noisier and in much lower resolution.
This work aims to address the above challenges and estimate scene flow from 4D radar point clouds by leveraging self-supervised learning.
arXiv Detail & Related papers (2022-03-02T14:28:12Z)
- How much depth information can radar infer and contribute [1.5899159309486681]
We investigate the intrinsic depth capability of radar data using state-of-the-art depth estimation models.
Our experiments demonstrate that the estimated depth from only sparse radar input can detect the shape of surroundings to a certain extent.
arXiv Detail & Related papers (2022-02-26T20:02:47Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)