CNN based Road User Detection using the 3D Radar Cube
- URL: http://arxiv.org/abs/2004.12165v2
- Date: Thu, 16 Jul 2020 10:06:15 GMT
- Title: CNN based Road User Detection using the 3D Radar Cube
- Authors: Andras Palffy, Jiaao Dong, Julian F. P. Kooij and Dariu M. Gavrila
- Abstract summary: We present a novel radar-based, single-frame, multi-class detection method for moving road users (pedestrian, cyclist, car).
The method provides class information on both the radar-target and the object level.
In experiments on a real-life dataset, we demonstrate that our method outperforms state-of-the-art methods both target- and object-wise.
- Score: 6.576173998482649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This letter presents a novel radar based, single-frame, multi-class detection
method for moving road users (pedestrian, cyclist, car), which utilizes
low-level radar cube data. The method provides class information both on the
radar target- and object-level. Radar targets are classified individually after
extending the target features with a cropped block of the 3D radar cube around
their positions, thereby capturing the motion of moving parts in the local
velocity distribution. A Convolutional Neural Network (CNN) is proposed for
this classification step. Afterwards, object proposals are generated with a
clustering step, which not only considers the radar targets' positions and
velocities, but their calculated class scores as well. In experiments on a
real-life dataset we demonstrate that our method outperforms the
state-of-the-art methods both target- and object-wise by reaching an average of
0.70 (baseline: 0.68) target-wise and 0.56 (baseline: 0.48) object-wise F1
score. Furthermore, we examine the importance of the used features in an
ablation study.
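To make the pipeline concrete, the sketch below illustrates the three stages the abstract describes: cropping a (range, azimuth, Doppler) block of the 3D radar cube around each target, classifying the block with a small CNN, and clustering targets into object proposals using positions, velocities, and class scores. This is a minimal illustration under stated assumptions, not the authors' architecture; the crop size, network layers, feature weights, and DBSCAN parameters are placeholders chosen for demonstration.

```python
# Minimal sketch of the pipeline described above: crop a block of the 3D radar
# cube (range x azimuth x Doppler) around each radar target, classify the block
# with a small CNN, then cluster targets into object proposals using position,
# velocity, and the predicted class scores. Crop size, layers, feature weights,
# and DBSCAN parameters are illustrative assumptions, not the paper's values.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import DBSCAN


def crop_block(cube, idx, size=(8, 8, 16)):
    """Crop a (range, azimuth, Doppler) block centred on a target's cube index,
    zero-padding at the cube borders so every target yields a full block."""
    pad = [(s // 2, s - s // 2) for s in size]
    padded = np.pad(cube, pad, mode="constant")
    return padded[tuple(slice(i, i + s) for i, s in zip(idx, size))]


class TargetCNN(nn.Module):
    """Small 3D CNN mapping a cropped radar-cube block to per-target class
    scores (e.g. pedestrian, cyclist, car, other)."""

    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.LazyLinear(n_classes)  # infers the flattened size

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def cluster_targets(xy, velocity, class_scores, w_v=1.0, w_c=2.0):
    """Class-score-aware clustering: run DBSCAN over position, velocity, and
    class scores so that nearby targets of incompatible classes separate."""
    feats = np.hstack([xy, w_v * velocity[:, None], w_c * class_scores])
    return DBSCAN(eps=1.5, min_samples=2).fit_predict(feats)


# Toy usage with random data standing in for a real radar cube and target list.
cube = np.random.rand(64, 128, 32).astype(np.float32)  # range x azimuth x Doppler
idxs = [(10, 40, 16), (11, 42, 17), (50, 90, 5)]       # targets' cube indices
blocks = torch.from_numpy(np.stack([crop_block(cube, i) for i in idxs]))[:, None]
scores = TargetCNN()(blocks).softmax(dim=1).detach().numpy()
labels = cluster_targets(np.array([[1.0, 4.0], [1.2, 4.3], [9.0, 20.0]]),
                         np.array([0.5, 0.6, -3.0]), scores)
print(labels)  # e.g. [0, 0, -1]: first two targets group, third is left as noise
```

Feeding the class scores into the clustering features is what lets the proposal step separate nearby targets of different classes, which is the role the abstract assigns to the class-aware clustering.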
Related papers
- RADLER: Radar Object Detection Leveraging Semantic 3D City Models and Self-Supervised Radar-Image Learning [37.577145092561715]
We first introduce a unique dataset, RadarCity, comprising 54K synchronized radar-image pairs and semantic 3D city models.
We propose a novel neural network, RADLER, leveraging the effectiveness of contrastive self-supervised learning (SSL) and semantic 3D city models.
We extensively evaluate RADLER on the collected RadarCity dataset and demonstrate average improvements of 5.46% in mean average precision (mAP) and 3.51% in mean average recall (mAR) over previous radar object detection methods.
arXiv Detail & Related papers (2025-04-16T15:18:56Z)
- SpaRC: Sparse Radar-Camera Fusion for 3D Object Detection [5.36022165180739]
We present SpaRC, a novel Sparse fusion transformer for 3D perception that integrates multi-view image semantics with Radar and Camera point features.
Empirical evaluations on the nuScenes and TruckScenes benchmarks demonstrate that SpaRC significantly outperforms existing dense BEV-based and sparse query-based detectors.
arXiv Detail & Related papers (2024-11-29T17:17:38Z)
- SeMoLi: What Moves Together Belongs Together [51.72754014130369]
We tackle semi-supervised object detection based on motion cues.
Recent results suggest that motion-based clustering methods can be used to pseudo-label instances of moving objects.
We re-think this approach and suggest that both object detection and motion-inspired pseudo-labeling can be tackled in a data-driven manner.
arXiv Detail & Related papers (2024-02-29T18:54:53Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions to radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- A recurrent CNN for online object detection on raw radar frames [7.074916574419171]
This work presents a new recurrent CNN architecture for online radar object detection.
We propose an end-to-end trainable architecture mixing convolutions and ConvLSTMs to learn dependencies between successive frames.
Our model is causal and requires only the past information encoded in the memory of the ConvLSTMs to detect objects.
arXiv Detail & Related papers (2022-12-21T16:36:36Z)
- RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object Detection Systems [13.046347364043594]
In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
arXiv Detail & Related papers (2022-11-11T10:24:42Z)
- Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles [8.797434238081372]
We present a novel radar-camera sensor fusion framework for accurate object detection and distance estimation in autonomous driving scenarios.
The proposed architecture uses a middle-fusion approach to fuse the radar point clouds and RGB images.
Experiments on the challenging nuScenes dataset show our method outperforms other existing radar-camera fusion methods in the 2D object detection task.
arXiv Detail & Related papers (2020-09-17T17:23:40Z)
- Radar-based Dynamic Occupancy Grid Mapping and Object Detection [55.74894405714851]
In recent years, the classical occupancy grid map approach has been extended to dynamic occupancy grid maps.
This paper presents the further development of a previous approach.
The data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied.
arXiv Detail & Related papers (2020-08-09T09:26:30Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- Probabilistic Oriented Object Detection in Automotive Radar [8.281391209717103]
We propose a deep-learning based algorithm for radar object detection.
We created a new multimodal dataset with 102544 frames of raw radar and synchronized LiDAR data.
Our best performing radar detection model achieves 77.28% AP under oriented IoU of 0.3.
arXiv Detail & Related papers (2020-04-11T05:29:32Z)
- RODNet: Radar Object Detection Using Cross-Modal Supervision [34.33920572597379]
Radar is usually more robust than the camera in severe driving scenarios.
Unlike RGB images captured by a camera, semantic information from the radar signals is noticeably difficult to extract.
We propose a deep radar object detection network (RODNet) to effectively detect objects purely from the radar frequency data.
arXiv Detail & Related papers (2020-03-03T22:33:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.