End-to-end system for object detection from sub-sampled radar data
- URL: http://arxiv.org/abs/2203.03905v1
- Date: Tue, 8 Mar 2022 08:02:33 GMT
- Title: End-to-end system for object detection from sub-sampled radar data
- Authors: Madhumitha Sakthi, Ahmed Tewfik, Marius Arvinte, Haris Vikalo
- Abstract summary: We present an end-to-end signal processing pipeline that relies on sub-sampled radar data to perform object detection in vehicular settings.
We show robust detection based on radar data reconstructed using 20% of samples under extreme weather conditions.
We generate 20%-sampled radar data for a fine-tuning set and show a 1.1% AP50 gain across scenes and a 3% AP50 gain in the motorway condition.
- Score: 18.462990836437626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust and accurate sensing is of critical importance for advancing
autonomous automotive systems. The need to acquire situational awareness in
complex urban conditions using sensors such as radar has motivated research on
power and latency-efficient signal acquisition methods. In this paper, we
present an end-to-end signal processing pipeline, capable of operating in
extreme weather conditions, that relies on sub-sampled radar data to perform
object detection in vehicular settings. The results of the object detection are
further utilized to sub-sample forthcoming radar data, which stands in contrast
to prior work where the sub-sampling relies on image information. We show
robust detection based on radar data reconstructed using 20% of samples under
extreme weather conditions such as snow or fog, and on low-illuminated nights.
Additionally, we generate 20%-sampled radar data for a fine-tuning set and show
a 1.1% AP50 gain across scenes and a 3% AP50 gain in the motorway condition.
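The pipeline's key feedback loop is that detections from the current frame drive where the next frame is sampled. The sketch below is a minimal, illustrative reading of that idea only: boxes detected previously get a denser sampling pattern, and the rest of the 20% budget is spread uniformly. The function name `build_sampling_mask` and parameters such as `box_density` are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def build_sampling_mask(shape, prior_boxes, budget=0.2, box_density=0.6, rng=None):
    """Illustrative detection-driven sub-sampling mask.

    shape       : (rows, cols) of the radar data grid (e.g., range-azimuth bins)
    prior_boxes : list of (r0, c0, r1, c1) boxes detected in the previous frame
    budget      : overall fraction of samples to keep (20% in the paper)
    box_density : assumed sampling density inside previously detected boxes
    """
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = shape
    mask = np.zeros(shape, dtype=bool)

    # Sample densely inside regions where objects were detected last frame.
    for r0, c0, r1, c1 in prior_boxes:
        mask[r0:r1, c0:c1] |= rng.random((r1 - r0, c1 - c0)) < box_density

    # Spend the remaining budget uniformly over the rest of the grid.
    total_budget = int(budget * rows * cols)
    remaining = max(total_budget - int(mask.sum()), 0)
    free = np.flatnonzero(~mask.ravel())
    chosen = rng.choice(free, size=min(remaining, free.size), replace=False)
    mask.ravel()[chosen] = True
    return mask

# Example: 256x256 range-azimuth grid, one box carried over from the last detection pass.
mask = build_sampling_mask((256, 256), [(40, 60, 80, 120)], budget=0.2)
print(mask.mean())  # roughly 0.2
```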
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
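Radar Fields pairs an implicit neural model with an explicit, physics-informed FMCW sensor model. As a toy illustration of the sensor-model half only, under standard FMCW assumptions and not code from the paper: a point reflector at range r produces a beat tone at f_b = 2*S*r/c, where S is the chirp slope. All parameter values below are assumed.

```python
import numpy as np

def fmcw_beat_signal(ranges_m, reflectivities, slope_hz_per_s=30e12,
                     fs=10e6, n_samples=1024, c=3e8):
    """Toy FMCW sensor model: a sum of beat tones, one per reflector."""
    t = np.arange(n_samples) / fs
    signal = np.zeros(n_samples, dtype=np.complex128)
    for r, a in zip(ranges_m, reflectivities):
        f_beat = 2.0 * slope_hz_per_s * r / c  # beat frequency of a reflector at range r
        signal += a * np.exp(2j * np.pi * f_beat * t)
    return signal

# Two reflectors at 12 m and 35 m; the range FFT of the synthesized signal shows two peaks.
spectrum = np.abs(np.fft.fft(fmcw_beat_signal([12.0, 35.0], [1.0, 0.5])))
```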
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
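A hedged sketch of the query-based fusion idea described above: learned BEV queries cross-attend to flattened radar spectrum features (and could likewise attend to image or lidar features). The module name, dimensions, and number of queries are assumptions, not EchoFusion's actual architecture.

```python
import torch
import torch.nn as nn

class BEVQueryFusion(nn.Module):
    """Illustrative cross-attention from BEV queries to radar spectrum features."""

    def __init__(self, num_queries=200, dim=256, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, radar_feats):
        # radar_feats: (batch, tokens, dim) -- flattened range-azimuth spectrum features
        b = radar_feats.shape[0]
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        fused, _ = self.attn(q, radar_feats, radar_feats)
        return self.norm(q + fused)  # (batch, num_queries, dim) BEV object queries

# Example: a batch of 2 radar feature maps flattened to 1024 tokens of width 256.
out = BEVQueryFusion()(torch.randn(2, 1024, 256))
```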
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric datasets receive comparatively little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
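The summary mentions a channel-chirp-time merging module used to shrink the model. Below is a minimal sketch of one plausible form of such a merge, collapsing the antenna-channel and chirp axes of a radar cube into the feature dimension with a 1x1 convolution; the layer sizes and exact merging strategy are assumptions, not RadarFormer's module.

```python
import torch
import torch.nn as nn

class ChannelChirpTimeMerge(nn.Module):
    """Illustrative merge of the (channel, chirp) axes into features via a 1x1 conv."""

    def __init__(self, channels=8, chirps=64, out_dim=32):
        super().__init__()
        # Treat channel*chirp as the feature depth of a 2D map over (range, time).
        self.merge = nn.Conv2d(channels * chirps, out_dim, kernel_size=1)

    def forward(self, cube):
        # cube: (batch, channels, chirps, range_bins, time_frames)
        b, c, k, r, t = cube.shape
        x = cube.reshape(b, c * k, r, t)
        return self.merge(x)  # (batch, out_dim, range_bins, time_frames)

merged = ChannelChirpTimeMerge()(torch.randn(1, 8, 64, 128, 16))
```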
arXiv Detail & Related papers (2023-04-17T17:07:35Z) - ADCNet: Learning from Raw Radar Data via Distillation [3.519713957675842]
Radar-based systems are lower cost and more robust to adverse weather conditions than their LiDAR-based counterparts.
Recent research has focused on consuming the raw radar data, instead of the final radar point cloud.
We show that by bringing elements of the signal processing pipeline into our network and then pre-training on the signal processing task, we are able to achieve state-of-the-art detection performance.
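One hedged reading of "pre-training on the signal processing task" is to train a small network to reproduce a classical transform, such as the range-FFT magnitude of raw ADC samples, before fine-tuning it for detection. The target choice, architecture, and training loop below are illustrative assumptions, not ADCNet.

```python
import torch
import torch.nn as nn

# Tiny network pre-trained to mimic the range-FFT magnitude of raw ADC chirps.
net = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=9, padding=4),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(100):                        # toy pre-training loop
    adc = torch.randn(16, 1, 256)           # raw ADC chirps: (batch, 1, samples)
    target = torch.fft.rfft(adc, dim=-1).abs()              # classical range profile
    target = nn.functional.interpolate(target, size=256)    # match output length
    loss = nn.functional.mse_loss(net(adc), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```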
arXiv Detail & Related papers (2023-03-21T13:31:15Z) - Automotive RADAR sub-sampling via object detection networks: Leveraging prior signal information [18.462990836437626]
Automotive radar has increasingly attracted attention due to growing interest in autonomous driving technologies.
We present a novel adaptive radar sub-sampling algorithm designed to identify regions that require more detailed/accurate reconstruction based on prior knowledge of the environmental conditions.
arXiv Detail & Related papers (2023-02-21T05:32:28Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
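LISA simulates adverse weather for lidar by degrading SNR and SBR. A highly simplified sketch of this kind of augmentation (not LISA's actual scattering model): attenuate return intensity exponentially with range using an extinction coefficient and drop points whose attenuated intensity falls below a noisy detection threshold. The coefficient and threshold values are assumed.

```python
import numpy as np

def foggy_lidar(points, intensity, alpha=0.03, threshold=0.05, rng=None):
    """Crude fog augmentation: attenuate returns with range, drop weak ones.

    points    : (N, 3) xyz in meters
    intensity : (N,) return intensity in [0, 1]
    alpha     : assumed extinction coefficient in 1/m
    """
    rng = np.random.default_rng() if rng is None else rng
    ranges = np.linalg.norm(points, axis=1)
    # Two-way attenuation of the optical power through the fog.
    attenuated = intensity * np.exp(-2.0 * alpha * ranges)
    keep = attenuated > threshold * rng.uniform(0.5, 1.5, size=len(points))
    return points[keep], attenuated[keep]

pts, inten = foggy_lidar(np.random.rand(1000, 3) * 50, np.random.rand(1000))
```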
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal.
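A common way to build a complex-valued convolution from real-valued layers, as used in CVCNNs generally: applying (W_r + jW_i) to (x_r + jx_i) gives real part W_r*x_r - W_i*x_i and imaginary part W_r*x_i + W_i*x_r. The sketch below follows that standard construction; it is not the authors' exact layer.

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex convolution built from two real convolutions."""

    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.real = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)
        self.imag = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)

    def forward(self, x_real, x_imag):
        # (W_r + jW_i)(x_r + jx_i) = (W_r x_r - W_i x_i) + j(W_r x_i + W_i x_r)
        out_real = self.real(x_real) - self.imag(x_imag)
        out_imag = self.real(x_imag) + self.imag(x_real)
        return out_real, out_imag

# Example on a batch of complex radar chirps split into real/imaginary parts.
layer = ComplexConv1d(1, 16, kernel_size=5, padding=2)
re, im = layer(torch.randn(4, 1, 512), torch.randn(4, 1, 512))
```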
arXiv Detail & Related papers (2021-04-29T10:06:29Z) - All-Weather Object Recognition Using Radar and Infrared Sensing [1.7513645771137178]
This thesis explores new sensing developments based on long wave polarised infrared (IR) imagery and imaging radar to recognise objects.
First, we developed a methodology based on Stokes parameters computed from polarised infrared data to recognise vehicles with deep neural networks.
Second, we explored the potential of using only the power spectrum captured by low-THz radar sensors to perform object recognition in a controlled scenario.
Finally, we created a new large-scale dataset "in the wild" covering many different weather scenarios, demonstrating the robustness of radar for detecting vehicles in adverse weather.
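The first contribution above relies on Stokes parameters from polarised infrared imagery. Below are the standard linear Stokes parameters computed from intensity images at 0, 45, 90, and 135 degree polariser orientations; stacking them as network input channels is an assumption about how such features might be fed to a classifier.

```python
import numpy as np

def stokes_from_polarized(i0, i45, i90, i135):
    """Linear Stokes parameters from four polariser orientations (degrees)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)  # degree of linear polarisation
    return np.stack([s0, s1, s2, dolp], axis=-1)          # e.g., network input channels

frames = [np.random.rand(240, 320) for _ in range(4)]
features = stokes_from_polarized(*frames)  # (240, 320, 4)
```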
arXiv Detail & Related papers (2020-10-30T14:16:39Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
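As a hedged sketch of the voxel-based early-fusion step only (not RadarNet's full architecture): rasterise lidar and radar points into a shared BEV grid and stack occupancy channels for a 2D detection backbone. Grid extent, resolution, and the occupancy feature are assumptions.

```python
import numpy as np

def bev_early_fusion(lidar_xy, radar_xy, extent=50.0, res=0.5):
    """Rasterise lidar and radar points into a shared BEV occupancy grid."""
    size = int(2 * extent / res)
    grid = np.zeros((2, size, size), dtype=np.float32)  # channel 0: lidar, 1: radar
    for ch, pts in enumerate((lidar_xy, radar_xy)):
        idx = np.floor((pts + extent) / res).astype(int)
        valid = (idx >= 0).all(axis=1) & (idx < size).all(axis=1)
        grid[ch, idx[valid, 1], idx[valid, 0]] = 1.0
    return grid  # (2, H, W), ready for a 2D detection backbone

fused = bev_early_fusion(np.random.uniform(-50, 50, (5000, 2)),
                         np.random.uniform(-50, 50, (300, 2)))
```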
arXiv Detail & Related papers (2020-07-28T17:15:02Z) - Probabilistic Oriented Object Detection in Automotive Radar [8.281391209717103]
We propose a deep-learning-based algorithm for radar object detection.
We created a new multimodal dataset with 102544 frames of raw radar and synchronized LiDAR data.
Our best performing radar detection model achieves 77.28% AP under oriented IoU of 0.3.
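The AP figure above is reported under an oriented IoU threshold of 0.3. For reference, oriented (rotated-box) IoU can be computed from polygon intersections, e.g., with shapely; the (centre, size, yaw) box parameterisation below is a common convention, not necessarily the paper's.

```python
import math
from shapely.geometry import Polygon

def oriented_iou(box_a, box_b):
    """IoU of two rotated boxes given as (cx, cy, length, width, yaw_radians)."""
    def to_polygon(cx, cy, l, w, yaw):
        c, s = math.cos(yaw), math.sin(yaw)
        corners = [(dx * l / 2, dy * w / 2) for dx, dy in
                   ((1, 1), (1, -1), (-1, -1), (-1, 1))]
        return Polygon([(cx + c * x - s * y, cy + s * x + c * y) for x, y in corners])

    pa, pb = to_polygon(*box_a), to_polygon(*box_b)
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter)

# A detection counts as a true positive at this threshold if oriented IoU >= 0.3.
print(oriented_iou((0, 0, 4, 2, 0.0), (0.5, 0.2, 4, 2, 0.2)))
```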
arXiv Detail & Related papers (2020-04-11T05:29:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.