Keep off the Grass: Permissible Driving Routes from Radar with Weak
Audio Supervision
- URL: http://arxiv.org/abs/2005.05175v2
- Date: Tue, 22 Sep 2020 07:28:19 GMT
- Title: Keep off the Grass: Permissible Driving Routes from Radar with Weak
Audio Supervision
- Authors: David Williams, Daniele De Martini, Matthew Gadd, Letizia Marchegiani,
Paul Newman
- Abstract summary: Perception systems based on FMCW scanning radar maintain full performance regardless of environmental conditions.
By combining odometry, GPS and the terrain labels from the audio classifier, we are able to construct a terrain labelled trajectory of the robot.
Using a curriculum learning procedure, we then train a radar segmentation network to generalise beyond the initial labelling.
- Score: 21.222339098241616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable outdoor deployment of mobile robots requires the robust
identification of permissible driving routes in a given environment. The
performance of LiDAR and vision-based perception systems deteriorates
significantly if certain environmental factors are present, e.g. rain, fog,
darkness. Perception systems based on FMCW scanning radar maintain full
performance regardless of environmental conditions and with a longer range than
alternative sensors. Learning to segment a radar scan based on driveability in
a fully supervised manner is not feasible as labelling each radar scan on a
bin-by-bin basis is both difficult and time-consuming to do by hand. We
therefore weakly supervise the training of the radar-based classifier through
an audio-based classifier that is able to predict the terrain type underneath
the robot. By combining odometry, GPS and the terrain labels from the audio
classifier, we are able to construct a terrain labelled trajectory of the robot
in the environment which is then used to label the radar scans. Using a
curriculum learning procedure, we then train a radar segmentation network to
generalise beyond the initial labelling and to detect all permissible driving
routes in the environment.
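
To make the weak-labelling step concrete, a minimal sketch follows (an illustration under assumptions, not the authors' released code): the radar pose at scan time comes from odometry/GPS, the audio classifier supplies a terrain label for each trajectory point, and every polar radar bin the robot drove over inherits that label. The function name label_radar_bins and the grid constants (400 azimuths, 3768 range bins of 4.38 cm, typical of a Navtech-style scanner) are assumptions for illustration.

import numpy as np

def label_radar_bins(scan_pose, traj_xy, traj_labels,
                     n_azimuths=400, n_range_bins=3768, bin_size=0.0438):
    """scan_pose: (x, y, yaw) of the radar at scan time, world frame.
    traj_xy: (N, 2) trajectory points; traj_labels: (N,) terrain ids
    from the audio classifier. Returns an (n_azimuths, n_range_bins)
    label grid with -1 marking unlabelled bins."""
    labels = np.full((n_azimuths, n_range_bins), -1, dtype=np.int32)
    x0, y0, yaw = scan_pose
    # transform the labelled trajectory into the radar frame
    dx, dy = traj_xy[:, 0] - x0, traj_xy[:, 1] - y0
    c, s = np.cos(-yaw), np.sin(-yaw)
    xr, yr = c * dx - s * dy, s * dx + c * dy
    rng, az = np.hypot(xr, yr), np.mod(np.arctan2(yr, xr), 2 * np.pi)
    ai = (az / (2 * np.pi) * n_azimuths).astype(int) % n_azimuths
    ri = (rng / bin_size).astype(int)
    ok = ri < n_range_bins
    labels[ai[ok], ri[ok]] = traj_labels[ok]
    return labels

These labels cover only the ribbon of terrain the robot actually traversed; the curriculum learning stage then trains the radar segmentation network to generalise from this ribbon to all permissible routes in the scan.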
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
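
As a rough illustration of that combination (an assumption-laden sketch, not the paper's actual architecture), an implicit MLP field can map world coordinates to occupancy and reflectance while a simple physics-informed renderer integrates returned power along each azimuth ray; all names and shapes below are illustrative.

import torch
import torch.nn as nn

# implicit geometry/reflectance field: 3-D point -> (occupancy logit, reflectance)
field = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 2))

def render_ray(origin, direction, n_bins=256, bin_size=0.5):
    """origin: (3,) sensor position; direction: (3,) unit ray direction.
    Returns per-range-bin returned power along the ray."""
    t = (torch.arange(n_bins) + 0.5) * bin_size      # bin-centre ranges
    pts = origin + t[:, None] * direction            # (n_bins, 3) sample points
    occ_logit, refl = field(pts).unbind(-1)
    occ = torch.sigmoid(occ_logit)
    # power reaching bin i decays with occupancy encountered en route (two-way)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - occ[:-1]]), dim=0)
    return trans ** 2 * occ * refl.relu()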
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Bootstrapping Autonomous Driving Radars with Self-Supervised Learning [13.13679517730015]
Training radar models is hindered by the cost and difficulty of annotating large-scale radar data.
We propose a self-supervised learning framework to leverage the large amount of unlabeled radar data to pre-train radar-only embeddings for self-driving perception tasks.
When used for downstream object detection, we demonstrate that the proposed self-supervision framework can improve the accuracy of state-of-the-art supervised baselines by 5.8% in mAP.
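
The summary above does not pin down the pre-training objective; one common recipe for learning radar-only embeddings from unlabeled scans is a contrastive loss over two augmented views of the same scan, sketched below (illustrative only, not necessarily this paper's method):

import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1, z2: (B, D) embeddings of two augmented views of the same B scans."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))      # matching pairs sit on the diagonal
    return F.cross_entropy(logits, targets)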
arXiv Detail & Related papers (2023-12-07T18:38:39Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
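
A hedged sketch of that query mechanism (the 200x200 BEV grid, the feature width d, and the function name fuse_radar are assumptions, not EchoFusion's actual code):

import torch
import torch.nn as nn

d = 256                                                 # assumed feature width
bev_queries = nn.Parameter(torch.randn(200 * 200, d))   # one query per BEV cell
cross_attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)

def fuse_radar(radar_spectrum_feats):
    """radar_spectrum_feats: (B, N_bins, d) features taken from the raw spectrum."""
    q = bev_queries.unsqueeze(0).expand(radar_spectrum_feats.size(0), -1, -1)
    fused, _ = cross_attn(q, radar_spectrum_feats, radar_spectrum_feats)
    return fused  # (B, 200*200, d) BEV features, ready to fuse with other sensors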
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Unsupervised Domain Adaptation across FMCW Radar Configurations Using Margin Disparity Discrepancy [17.464353263281907]
In this work, we consider the problem of unsupervised domain adaptation across radar configurations in the context of deep-learning human activity classification.
We focus on the theory-inspired technique of Margin Disparity Discrepancy, which has already been proved successful in the area of computer vision.
Our experiments extend this technique to radar data, achieving accuracy comparable to few-shot supervised approaches for the same classification problem.
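
For orientation, a simplified sketch of a Margin Disparity Discrepancy-style objective (after Zhang et al., 2019); the full method trains the auxiliary head adversarially through a gradient-reversal layer, omitted here, and this paper's exact formulation may differ:

import torch
import torch.nn.functional as F

def mdd_loss(f_src, f_tgt, f1_src, f1_tgt, y_src, gamma=4.0):
    """f_*: main classifier logits; f1_*: auxiliary classifier logits;
    y_src: source-domain labels."""
    cls = F.cross_entropy(f_src, y_src)          # supervised source loss
    pseudo_s = f_src.argmax(1).detach()
    pseudo_t = f_tgt.argmax(1).detach()
    disp_s = F.cross_entropy(f1_src, pseudo_s)   # heads should agree on source...
    p_t = F.softmax(f1_tgt, 1).gather(1, pseudo_t[:, None]).squeeze(1)
    disp_t = -torch.log1p(-p_t.clamp(max=1 - 1e-6)).mean()  # ...and disagree on target
    return cls + gamma * disp_s - disp_t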
arXiv Detail & Related papers (2022-03-09T09:11:06Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end deep learning driven fashion using PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
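
The clustering route can be sketched in a few lines; eps and min_samples are illustrative parameters, not values from the paper:

import numpy as np
from sklearn.cluster import DBSCAN

def instances_from_points(points_xy, sem_labels, eps=1.5, min_samples=2):
    """points_xy: (N, 2) radar detections; sem_labels: (N,) semantic class ids.
    Returns (N,) instance ids, -1 for noise, unique across classes."""
    inst = np.full(len(points_xy), -1)
    next_id = 0
    for c in np.unique(sem_labels):
        idx = np.where(sem_labels == c)[0]
        cl = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy[idx])
        keep = cl >= 0
        inst[idx[keep]] = cl[keep] + next_id   # offset so ids stay globally unique
        if keep.any():
            next_id = inst.max() + 1
    return inst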
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
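
One minimal way sparse radar returns can supervise a monocular depth network (a sketch under assumptions; the paper's filtering and weighting of radar returns are more involved):

import torch

def radar_depth_loss(pred_depth, radar_uv, radar_range):
    """pred_depth: (H, W) predicted depth map; radar_uv: (N, 2) pixel
    coordinates of projected radar returns; radar_range: (N,) measured ranges."""
    u, v = radar_uv[:, 0].long(), radar_uv[:, 1].long()
    # penalize depth error only at pixels where a radar return lands
    return torch.abs(pred_depth[v, u] - radar_range).mean()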
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
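
A complex-valued convolution is commonly assembled from two real convolutions via (a + ib)(w + iv) = (aw - bv) + i(av + bw); a minimal sketch of such a layer (not the paper's exact architecture):

import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.re = nn.Conv1d(in_ch, out_ch, k, padding=k // 2)  # real weights w
        self.im = nn.Conv1d(in_ch, out_ch, k, padding=k // 2)  # imaginary weights v

    def forward(self, x_re, x_im):
        # (a + ib)(w + iv) = (aw - bv) + i(av + bw)
        return (self.re(x_re) - self.im(x_im),
                self.re(x_im) + self.im(x_re))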
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- Radar Artifact Labeling Framework (RALF): Method for Plausible Radar Detections in Datasets [2.5899040911480187]
We propose a cross-sensor Radar Artifact Labeling Framework (RALF) for labeling sparse radar point clouds.
RALF provides plausibility labels for radar raw detections, distinguishing between artifacts and targets.
We validate the results by evaluating error metrics on a semi-manually labeled ground-truth dataset of 3.28 × 10^6 points.
arXiv Detail & Related papers (2020-12-03T15:11:31Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
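
The early-fusion half can be pictured as a channel-wise concatenation of voxelized LiDAR and radar features before a shared backbone (channel counts and names below are assumptions, not RadarNet's actual configuration):

import torch
import torch.nn as nn

fuse = nn.Conv2d(64 + 16, 64, kernel_size=3, padding=1)

def early_fusion(lidar_bev, radar_bev):
    """lidar_bev: (B, 64, H, W); radar_bev: (B, 16, H, W) voxelized features
    on a shared BEV grid."""
    return fuse(torch.cat([lidar_bev, radar_bev], dim=1))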
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar [26.56755178602111]
We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions.
We exploit the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors.
We present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
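
A trivial sketch of that multi-channel input: stack a short history of scans as channels so transient returns can be told apart from static structure (the per-scan normalization is an assumption; aligning scans across robot motion is a further consideration not shown):

import numpy as np

def stack_scans(scans, eps=1e-6):
    """scans: list of T (H, W) Cartesian radar scans, oldest first.
    Returns a (T, H, W) multi-channel network input."""
    x = np.stack(scans, axis=0).astype(np.float32)
    # normalize each scan so power drift over time does not dominate
    x = (x - x.mean(axis=(1, 2), keepdims=True)) / (x.std(axis=(1, 2), keepdims=True) + eps)
    return x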
arXiv Detail & Related papers (2020-04-02T11:40:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.