ADCNet: Learning from Raw Radar Data via Distillation
- URL: http://arxiv.org/abs/2303.11420v3
- Date: Wed, 13 Dec 2023 17:50:32 GMT
- Title: ADCNet: Learning from Raw Radar Data via Distillation
- Authors: Bo Yang, Ishan Khatri, Michael Happold, Chulong Chen
- Abstract summary: Radar-based systems are lower cost and more robust to adverse weather conditions than their LiDAR-based counterparts.
Recent research has focused on consuming the raw radar data, instead of the final radar point cloud.
We show that by bringing elements of the signal processing pipeline into our network and then pre-training on the signal processing task, we are able to achieve state of the art detection performance.
- Score: 3.519713957675842
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As autonomous vehicles and advanced driving assistance systems have entered
wider deployment, there is an increased interest in building robust perception
systems using radars. Radar-based systems are lower cost and more robust to
adverse weather conditions than their LiDAR-based counterparts; however, the
point clouds produced are typically noisy and sparse by comparison. In order to
combat these challenges, recent research has focused on consuming the raw radar
data, instead of the final radar point cloud. We build on this line of work and
demonstrate that by bringing elements of the signal processing pipeline into
our network and then pre-training on the signal processing task, we are able to
achieve state of the art detection performance on the RADIal dataset. Our
method uses expensive offline signal processing algorithms to pseudo-label data
and trains a network to distill this information into a fast convolutional
backbone, which can then be finetuned for perception tasks. Extensive
experiment results corroborate the effectiveness of the proposed techniques.
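The pre-training recipe described in the abstract, pseudo-labeling raw ADC data with an expensive offline signal processing chain and distilling it into a fast convolutional backbone, could be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the toy RadarBackbone, the use of a log-magnitude range-Doppler map as the pseudo-label target, and all shapes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

def offline_pseudo_labels(adc):
    """Stand-in for the expensive offline pipeline: a classical range-Doppler chain.
    adc: complex tensor [batch, antennas, chirps, samples]."""
    rd = torch.fft.fft(adc, dim=-1)      # range FFT over fast-time samples
    rd = torch.fft.fft(rd, dim=-2)       # Doppler FFT over chirps
    rd = rd.abs().mean(dim=1)            # non-coherent combination over antennas
    return torch.log1p(rd)               # log-magnitude range-Doppler map

class RadarBackbone(nn.Module):
    """Fast convolutional student operating on raw ADC (real/imag as channels)."""
    def __init__(self, antennas=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * antennas, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, adc):
        x = torch.cat([adc.real, adc.imag], dim=1)   # [B, 2*A, chirps, samples]
        return self.net(x).squeeze(1)                # predicted range-Doppler map

# Distillation pre-training step (sketch): the student regresses the offline
# pseudo-labels; the pre-trained backbone is later fine-tuned for detection.
student = RadarBackbone()
optim = torch.optim.Adam(student.parameters(), lr=1e-4)
adc = torch.randn(2, 16, 256, 512, dtype=torch.complex64)   # dummy raw ADC batch
target = offline_pseudo_labels(adc)
optim.zero_grad()
loss = nn.functional.l1_loss(student(adc), target)
loss.backward()
optim.step()
```

After pre-training on this signal processing task, the backbone would be fine-tuned with a task-specific head on labeled perception data such as RADIal.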
Related papers
- SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data [5.344444942640663]
Radar raw data often contains excessive noise, whereas radar point clouds retain only limited information.
We introduce an adaptive subsampling method together with a tailored network architecture that exploits the sparsity patterns.
Experiments on the RADIal dataset show that our SparseRadNet exceeds state-of-the-art (SOTA) performance in object detection and achieves close to SOTA accuracy in freespace segmentation.
arXiv Detail & Related papers (2024-06-15T11:26:10Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
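The EchoFusion entry above mentions generating BEV queries and then taking the corresponding spectrum features from the radar branch. A minimal sketch of that lookup step, assuming a range-azimuth spectrum feature map and bilinear sampling (the query generation and cross-sensor fusion in the paper are more involved), might look like this:

```python
import math
import torch
import torch.nn.functional as F

def sample_spectrum_features(spectrum, bev_xy, max_range):
    """Gather radar spectrum features at BEV query locations.
    spectrum: [B, C, R, A] features over (range, azimuth) bins.
    bev_xy:   [B, N, 2] query positions (x, y) in metres, ego-centred.
    Returns:  [B, C, N] features to fuse with other sensor branches."""
    x, y = bev_xy[..., 0], bev_xy[..., 1]
    rng = torch.sqrt(x ** 2 + y ** 2)        # polar range of each query
    az = torch.atan2(y, x)                   # polar azimuth of each query
    # Normalise to [-1, 1] for grid_sample: width axis = azimuth, height axis = range.
    grid = torch.stack([az / math.pi, 2 * rng / max_range - 1], dim=-1)  # [B, N, 2]
    sampled = F.grid_sample(spectrum, grid.unsqueeze(2), align_corners=False)
    return sampled.squeeze(-1)               # [B, C, N]

# Example: 200 BEV queries against a dummy range-azimuth feature map.
spectrum = torch.randn(1, 64, 128, 256)      # B, C, range bins, azimuth bins
queries = torch.rand(1, 200, 2) * 50.0       # (x, y) positions up to 50 m
feats = sample_spectrum_features(spectrum, queries, max_range=100.0)
print(feats.shape)                           # torch.Size([1, 64, 200])
```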
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions on point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric datasets receive comparatively little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z)
- Automotive RADAR sub-sampling via object detection networks: Leveraging prior signal information [18.462990836437626]
Automotive radar has increasingly attracted attention due to growing interest in autonomous driving technologies.
We present a novel adaptive radar sub-sampling algorithm designed to identify regions that require more detailed/accurate reconstruction based on prior knowledge of environmental conditions.
arXiv Detail & Related papers (2023-02-21T05:32:28Z)
- ERASE-Net: Efficient Segmentation Networks for Automotive Radar Signals [13.035425992944543]
We introduce ERASE-Net, an Efficient RAdar SEgmentation Network to segment raw radar signals semantically.
We show that our method can achieve superior performance on the radar semantic segmentation task compared to the state-of-the-art (SOTA) technique.
arXiv Detail & Related papers (2022-09-26T18:23:22Z)
- End-to-end system for object detection from sub-sampled radar data [18.462990836437626]
We present an end-to-end signal processing pipeline that relies on sub-sampled radar data to perform object detection in vehicular settings.
We show robust detection based on radar data reconstructed using 20% of samples under extreme weather conditions.
We generate 20%-sampled radar data in a fine-tuning set and show a 1.1% gain in AP50 across scenes and a 3% AP50 gain in the motorway condition.
arXiv Detail & Related papers (2022-03-08T08:02:33Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
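As a rough illustration of the complex-valued convolutions referenced in the entry above (not the architecture from the paper itself), a complex convolution layer can be assembled from two real-valued convolutions via (a + ib)(c + id) = (ac - bd) + i(ad + bc):

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex-valued 2D convolution built from two real convolutions."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x):
        # x is complex: (a + ib) convolved with weights (c + id)
        real = self.conv_re(x.real) - self.conv_im(x.imag)
        imag = self.conv_re(x.imag) + self.conv_im(x.real)
        return torch.complex(real, imag)

# Example: pass a complex radar spectrum patch through the layer.
layer = ComplexConv2d(1, 8, kernel_size=3, padding=1)
spectrum = torch.randn(4, 1, 64, 64, dtype=torch.complex64)
out = layer(spectrum)
print(out.shape, out.dtype)   # torch.Size([4, 8, 64, 64]) torch.complex64
```

Keeping the convolution complex-valued preserves phase relationships that a purely real, magnitude-based network would discard, which is the motivation given for improved interference removal.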
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
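The RadarNet entry above pairs voxel-based early fusion with attention-based late fusion. The sketch below is a loose interpretation under assumed names and shapes, not the authors' code: early fusion concatenates the two sensors' BEV feature maps, while late fusion attention-weights the radial velocities of radar returns associated with a detection.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate LiDAR and radar BEV voxel features, then mix with a conv."""
    def __init__(self, lidar_ch=64, radar_ch=32, out_ch=128):
        super().__init__()
        self.mix = nn.Conv2d(lidar_ch + radar_ch, out_ch, 3, padding=1)

    def forward(self, lidar_bev, radar_bev):
        return self.mix(torch.cat([lidar_bev, radar_bev], dim=1))

def attention_late_fusion(det_feat, radar_feats, radar_vel, score_net):
    """Attention-weight the radial velocities of radar returns near one detection.
    det_feat:    [D] feature of a detected object.
    radar_feats: [K, D] features of K associated radar returns.
    radar_vel:   [K] their measured radial velocities."""
    pair = torch.cat([radar_feats, det_feat.expand_as(radar_feats)], dim=-1)
    weights = torch.softmax(score_net(pair).squeeze(-1), dim=0)   # [K]
    return (weights * radar_vel).sum()                            # refined velocity

# Example usage with dummy tensors.
fuser = EarlyFusion()
bev = fuser(torch.randn(1, 64, 200, 200), torch.randn(1, 32, 200, 200))
score_net = nn.Linear(2 * 64, 1)
v = attention_late_fusion(torch.randn(64), torch.randn(5, 64), torch.randn(5), score_net)
print(bev.shape, v.item())
```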