Off-the-shelf sensor vs. experimental radar -- How much resolution is
necessary in automotive radar classification?
- URL: http://arxiv.org/abs/2006.05485v1
- Date: Tue, 9 Jun 2020 19:51:34 GMT
- Title: Off-the-shelf sensor vs. experimental radar -- How much resolution is
necessary in automotive radar classification?
- Authors: Nicolas Scheiner, Ole Schumann, Florian Kraus, Nils Appenrodt,
Jürgen Dickmann, Bernhard Sick
- Abstract summary: The resolution of conventional automotive radar sensors results in a sparse data representation.
A new sensor generation is waiting in the wings for its application in this challenging field.
Two sensors of different radar generations are evaluated against each other.
- Score: 5.452955349285637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radar-based road user detection is an important topic in the context of
autonomous driving applications. The resolution of conventional automotive
radar sensors results in a sparse data representation that is difficult to refine
during subsequent signal processing. On the other hand, a new sensor generation
is waiting in the wings for its application in this challenging field. In this
article, two sensors of different radar generations are evaluated against each
other. The evaluation criterion is the performance on moving road user object
detection and classification tasks. To this end, two data sets originating from
an off-the-shelf radar and a high resolution next generation radar are
compared. Special attention is given to how the two data sets are assembled in
order to make them comparable. The utilized object detector consists of a
clustering algorithm, a feature extraction module, and a recurrent neural
network ensemble for classification. For the assessment, all components are
evaluated both individually and, for the first time, as a whole. This makes it
possible to pinpoint where in the pipeline the overall performance improvements
originate. Furthermore, the generalization capabilities of both data sets are
evaluated and important comparison metrics for automotive radar object
detection are discussed. Results show clear benefits of the next generation
radar. Interestingly, those benefits do not actually occur due to better
performance at the classification stage, but rather because of the vast
improvements at the clustering stage.
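The evaluated object detector (clustering, then hand-crafted feature extraction per cluster, then classification) can be illustrated with a much simplified, self-contained sketch. The clustering below is a greedy DBSCAN-like grouping, the features are illustrative, and a trivial rule stands in for the paper's recurrent-network ensemble; all function names, thresholds, and the point format are hypothetical, not taken from the paper:

```python
import math

def cluster(points, eps=1.0, min_pts=2):
    """Greedy density-based grouping of radar detections (DBSCAN-like).
    points: list of (x, y, doppler) tuples; groups smaller than
    min_pts are discarded as noise."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        group, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i][:2], points[j][:2]) <= eps]
            for j in near:
                unvisited.discard(j)
                group.append(j)
                frontier.append(j)
        if len(group) >= min_pts:
            clusters.append([points[i] for i in group])
    return clusters

def extract_features(cluster_pts):
    """Hand-crafted per-cluster features: point count, spatial extent
    in x and y, and mean Doppler velocity."""
    xs = [p[0] for p in cluster_pts]
    ys = [p[1] for p in cluster_pts]
    vs = [p[2] for p in cluster_pts]
    return (len(cluster_pts),
            max(xs) - min(xs),
            max(ys) - min(ys),
            sum(vs) / len(vs))

def classify(features):
    """Placeholder stand-in for the paper's recurrent-network ensemble:
    a hypothetical rule on cluster extent and mean Doppler."""
    n, dx, dy, v_mean = features
    if max(dx, dy) > 2.5:
        return "car"
    return "pedestrian" if abs(v_mean) < 2.0 else "cyclist"
```

Evaluating such a pipeline end to end, as the paper does, reveals whether a gain comes from better groupings or from better per-cluster decisions.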
Related papers
- RadarGen: Automotive Radar Point Cloud Generation from Cameras [64.69976771710057]
We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form. We show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data.
arXiv Detail & Related papers (2025-12-19T18:57:33Z) - Redefining Radar Segmentation: Simultaneous Static-Moving Segmentation and Ego-Motion Estimation using Radar Point Clouds [42.08401139629074]
This study proposes a neural network based solution that can simultaneously segment static and moving objects from radar point clouds. The measured radial velocity of static objects is correlated with the motion of the radar. It can also estimate the instantaneous 2D velocity of the moving platform or vehicle.
arXiv Detail & Related papers (2025-11-25T07:13:34Z) - The Radar Ghost Dataset -- An Evaluation of Ghost Objects in Automotive Radar Data [12.653873936535149]
Many surfaces in a typical traffic scenario appear flat relative to the radar's emitted signal.
This results in multi-path reflections, so-called ghost detections, in the radar signal.
We present a dataset with detailed manual annotations for different kinds of ghost detections.
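As an illustration of the geometry behind such multipath ghosts (not the dataset's annotation method), a single-bounce reflection off a flat surface makes the target appear mirrored behind the reflector. A minimal 2D sketch, with hypothetical names:

```python
def mirror_point(p, a, b):
    """Reflect point p across the line through a and b (a flat surface).
    Under a single-bounce multipath model, the radar perceives a 'ghost'
    target at approximately this mirrored position behind the reflector."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    # Project p onto the line, then reflect it through the foot point.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy  # foot of the perpendicular
    return (2 * fx - px, 2 * fy - py)
```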
arXiv Detail & Related papers (2024-04-01T19:20:32Z) - Exploring Radar Data Representations in Autonomous Driving: A Comprehensive Review [9.68427762815025]
This review focuses on the different radar data representations utilized in autonomous driving systems.
We introduce the capabilities and limitations of the radar sensor.
For each radar representation, we examine the related datasets, methods, advantages and limitations.
arXiv Detail & Related papers (2023-12-08T06:31:19Z) - Radar-Lidar Fusion for Object Detection by Designing Effective
Convolution Networks [18.17057711053028]
We propose a dual-branch framework to integrate radar and Lidar data for enhanced object detection.
The results show that it surpasses state-of-the-art methods by 1.89% and 2.61% in favorable and adverse weather conditions, respectively.
arXiv Detail & Related papers (2023-10-30T10:18:40Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z) - Semantic Segmentation of Radar Detections using Convolutions on Point
Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
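The distance-dependent clustering mentioned above exploits the fact that a radar's angular resolution is fixed, so the cross-range spacing between detections on the same object grows linearly with range, and the clustering neighborhood should grow with it. A hedged sketch of such a range-adaptive radius (names and constants are illustrative, not from the paper):

```python
def adaptive_eps(range_m, base_eps=0.5, angular_res_rad=0.0175):
    """Range-dependent neighborhood radius for clustering radar detections.
    Because angular resolution is fixed, the cross-range gap between
    detections scales roughly as range * angular_resolution, so the
    clustering radius is widened accordingly."""
    return base_eps + range_m * angular_res_rad
```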
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end deep learning driven fashion using PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
arXiv Detail & Related papers (2021-10-05T01:18:27Z) - Complex-valued Convolutional Neural Networks for Enhanced Radar Signal
Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z) - Multi-View Radar Semantic Segmentation [3.2093811507874768]
Automotive radars are low-cost active sensors that measure properties of surrounding objects.
They are seldom used for scene understanding due to the size and complexity of raw radar data.
We propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically.
arXiv Detail & Related papers (2021-03-30T09:56:41Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar
Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.