RADDet: Range-Azimuth-Doppler based Radar Object Detection for Dynamic Road Users
- URL: http://arxiv.org/abs/2105.00363v1
- Date: Sun, 2 May 2021 00:25:11 GMT
- Title: RADDet: Range-Azimuth-Doppler based Radar Object Detection for Dynamic Road Users
- Authors: Ao Zhang, Farzan Erlik Nowruzi, Robert Laganiere
- Abstract summary: We collect a novel radar dataset that contains radar data in the form of Range-Azimuth-Doppler tensors.
To build the dataset, we propose an instance-wise auto-annotation method.
A novel Range-Azimuth-Doppler based multi-class object detection deep learning model is proposed.
- Score: 6.61211659120882
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Object detection using automotive radars has not been explored with deep
learning models to the same extent as camera-based approaches. This can be
attributed to the lack of public radar datasets. In this paper, we collect a
novel radar dataset that contains radar data in the form of
Range-Azimuth-Doppler tensors, along with bounding boxes on the tensor for
dynamic road users, category labels, and 2D bounding boxes on the Cartesian
Bird's-Eye-View range map. To build the dataset, we propose an instance-wise
auto-annotation method. Furthermore, we propose a novel Range-Azimuth-Doppler
based multi-class object detection deep learning model. The algorithm is a
one-stage anchor-based detector that generates 3D bounding boxes in the
Range-Azimuth-Doppler domain and 2D bounding boxes in the Cartesian domain.
Our proposed algorithm achieves 56.3% AP at an IoU of 0.3 on 3D bounding box
predictions, and 51.6% AP at an IoU of 0.5 on 2D bounding box predictions.
Our dataset and code are available at
https://github.com/ZhangAoCanada/RADDet.git.
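The detector produces boxes in both the polar Range-Azimuth-Doppler domain and the Cartesian BEV domain. The abstract does not spell out the coordinate conversion between the two, but the underlying polar-to-Cartesian resampling is standard geometry. Below is a minimal sketch, not RADDet's actual code, assuming a hypothetical (range, azimuth) grid whose azimuth bins span -90 to +90 degrees:

```python
import numpy as np

def ra_to_cartesian_bev(ra_map, max_range_m, out_size=256):
    """Resample a (range, azimuth) map onto a Cartesian BEV grid.

    ra_map: 2D array indexed by [range_bin, azimuth_bin]; azimuth bins
    are assumed (hypothetically) to span -pi/2 .. +pi/2 uniformly.
    """
    n_range, n_azimuth = ra_map.shape
    bev = np.zeros((out_size, out_size), dtype=ra_map.dtype)

    # Cartesian grid: x is lateral, y is forward range.
    xs = np.linspace(-max_range_m, max_range_m, out_size)
    ys = np.linspace(0.0, max_range_m, out_size)
    xx, yy = np.meshgrid(xs, ys)

    r = np.hypot(xx, yy)        # radial distance of each BEV cell
    az = np.arctan2(xx, yy)     # azimuth of each BEV cell (0 = straight ahead)

    # Nearest polar bin for each BEV cell.
    r_idx = np.round(r / max_range_m * (n_range - 1)).astype(int)
    az_idx = np.round((az + np.pi / 2) / np.pi * (n_azimuth - 1)).astype(int)

    valid = (r <= max_range_m) & (az_idx >= 0) & (az_idx < n_azimuth)
    bev[valid] = ra_map[r_idx[valid], az_idx[valid]]
    return bev
```

A box predicted on the RAD tensor can be carried over to the BEV map by passing its range-azimuth center through the same geometry.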
Related papers
- RICCARDO: Radar Hit Prediction and Convolution for Camera-Radar 3D Object Detection [16.872776956141195]
We build a model to predict radar hit distributions conditioned on object properties obtained from a monocular detector.
We use the predicted distribution as a kernel to match actual measured radar points in the neighborhood of the monocular detections.
Our method achieves the state-of-the-art radar-camera detection performance on nuScenes.
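The summary leaves the kernel-matching step abstract. As a toy illustration, not the paper's implementation, one can score measured radar points against a hypothetical 2D Gaussian hit distribution predicted from a monocular detection:

```python
import numpy as np

def kernel_match_score(radar_xy, pred_mean, pred_cov):
    """Score radar hits against a predicted 2D Gaussian hit distribution.

    radar_xy: (N, 2) measured radar points near a monocular detection.
    pred_mean (2,) and pred_cov (2, 2): hypothetical outputs of a
    hit-distribution model. Returns the summed density as a match score.
    """
    inv_cov = np.linalg.inv(pred_cov)
    diff = radar_xy - pred_mean                            # (N, 2)
    maha = np.einsum("ni,ij,nj->n", diff, inv_cov, diff)   # squared distances
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(pred_cov))
    return float(np.sum(np.exp(-0.5 * maha) / norm))
```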
arXiv Detail & Related papers (2025-04-12T05:37:42Z)
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance.
Radar suffers from noise and positional ambiguity.
We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- GET-UP: GEomeTric-aware Depth Estimation with Radar Points UPsampling [7.90238039959534]
Existing algorithms process radar data by projecting 3D points onto the image plane for pixel-level feature extraction.
We propose GET-UP, leveraging attention-enhanced Graph Neural Networks (GNN) to exchange and aggregate both 2D and 3D information from radar data.
We benchmark our proposed GET-UP on the nuScenes dataset, achieving state-of-the-art performance with a 15.3% and 14.7% improvement in MAE and RMSE over the previously best-performing model.
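The attention-enhanced GNN itself is beyond this summary, but the basic message-passing idea it builds on can be sketched in a few lines. This is a simplified, hypothetical stand-in (mean aggregation over a given edge list), not GET-UP's architecture:

```python
import numpy as np

def gnn_aggregate(node_feats, edges):
    """One round of mean-neighbor message passing over radar points.

    node_feats: (N, C) per-point features; edges: (E, 2) directed
    (src, dst) index pairs. Returns features concatenated with the
    mean of incoming neighbor features.
    """
    n, c = node_feats.shape
    agg = np.zeros((n, c))
    counts = np.zeros(n)
    np.add.at(agg, edges[:, 1], node_feats[edges[:, 0]])   # sum messages
    np.add.at(counts, edges[:, 1], 1)                      # in-degree
    counts = np.maximum(counts, 1)                         # avoid div-by-zero
    return np.concatenate([node_feats, agg / counts[:, None]], axis=1)
```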
arXiv Detail & Related papers (2024-09-02T14:15:09Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
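A toy sketch of the sampling idea, not EchoFusion's implementation: given BEV query positions, look up the corresponding bins of a polar radar spectrum (the azimuth span and bin layout here are assumptions):

```python
import numpy as np

def sample_spectrum_at_queries(spectrum, query_xy, max_range_m):
    """Look up polar radar-spectrum features at BEV query positions.

    spectrum: (n_range, n_azimuth, C) feature tensor; azimuth bins are
    assumed (hypothetically) to span -pi/2 .. +pi/2.
    query_xy: (Q, 2) BEV positions in meters, x lateral and y forward.
    """
    n_range, n_azimuth, _ = spectrum.shape
    r = np.hypot(query_xy[:, 0], query_xy[:, 1])
    az = np.arctan2(query_xy[:, 0], query_xy[:, 1])
    r_idx = np.clip(np.round(r / max_range_m * (n_range - 1)).astype(int),
                    0, n_range - 1)
    az_idx = np.clip(np.round((az + np.pi / 2) / np.pi * (n_azimuth - 1)).astype(int),
                     0, n_azimuth - 1)
    return spectrum[r_idx, az_idx]   # (Q, C) per-query features
```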
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Fully Sparse Fusion for 3D Object Detection [69.32694845027927]
Currently prevalent multimodal 3D detection methods are built upon LiDAR-based detectors that usually use dense Bird's-Eye-View feature maps.
Fully sparse architectures are gaining attention as they are highly efficient in long-range perception.
In this paper, we study how to effectively leverage image modality in the emerging fully sparse architecture.
arXiv Detail & Related papers (2023-04-24T17:57:43Z)
- Fully Sparse 3D Object Detection [57.05834683261658]
We build a fully sparse 3D object detector (FSD) for long-range LiDAR-based object detection.
FSD is built upon the general sparse voxel encoder and a novel sparse instance recognition (SIR) module.
SIR avoids the time-consuming neighbor queries in previous point-based methods by grouping points into instances.
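The grouping step can be done with a single sort rather than per-point neighbor queries; a minimal sketch, assuming instance ids have already been predicted:

```python
import numpy as np

def group_points_by_instance(points, instance_ids):
    """Group points into instances in one pass.

    points: (N, D) point features; instance_ids: (N,) integer labels.
    Returns a dict mapping instance id -> (M_i, D) point subset.
    """
    order = np.argsort(instance_ids)
    ids_sorted = instance_ids[order]
    # Boundaries where the sorted instance id changes.
    bounds = np.flatnonzero(np.diff(ids_sorted)) + 1
    groups = np.split(points[order], bounds)
    uniq = ids_sorted[np.concatenate(([0], bounds))]
    return dict(zip(uniq.tolist(), groups))
```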
arXiv Detail & Related papers (2022-07-20T17:01:33Z)
- FGR: Frustum-Aware Geometric Reasoning for Weakly Supervised 3D Vehicle Detection [81.79171905308827]
We propose frustum-aware geometric reasoning (FGR) to detect vehicles in point clouds without any 3D annotations.
Our method consists of two stages: coarse 3D segmentation and 3D bounding box estimation.
It is able to accurately detect objects in 3D space with only 2D bounding boxes and sparse point clouds.
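The first stage relies on selecting the points that fall inside a detection frustum, which is standard pinhole geometry. A minimal sketch with a hypothetical intrinsics matrix, not the paper's code:

```python
import numpy as np

def frustum_points(points_cam, box2d, K):
    """Keep points whose image projection lands inside a 2D box.

    points_cam: (N, 3) points in camera coordinates (z forward).
    box2d: (x1, y1, x2, y2) pixel box from a 2D detector.
    K: 3x3 camera intrinsics matrix.
    """
    x1, y1, x2, y2 = box2d
    in_front = points_cam[:, 2] > 1e-6
    uvw = points_cam @ K.T                     # homogeneous pixel coordinates
    z = np.where(in_front, uvw[:, 2], 1.0)     # avoid divide-by-zero
    u = uvw[:, 0] / z
    v = uvw[:, 1] / z
    inside = in_front & (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return points_cam[inside]
```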
arXiv Detail & Related papers (2021-05-17T07:29:55Z)
- CNN based Road User Detection using the 3D Radar Cube [6.576173998482649]
We present a novel radar-based, single-frame, multi-class detection method for moving road users (pedestrian, cyclist, car).
The method provides class information at both the radar-target and object level.
In experiments on a real-life dataset, we demonstrate that our method outperforms state-of-the-art methods both target- and object-wise.
arXiv Detail & Related papers (2020-04-25T15:07:03Z)
- Probabilistic Oriented Object Detection in Automotive Radar [8.281391209717103]
We propose a deep-learning based algorithm for radar object detection.
We created a new multimodal dataset with 102544 frames of raw radar and synchronized LiDAR data.
Our best performing radar detection model achieves 77.28% AP under oriented IoU of 0.3.
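For reference, AP at a fixed IoU threshold (0.3 here, as for RADDet above) is computed from score-ranked detections that have already been matched against ground truth. A minimal sketch using 101-point interpolation, which is one common convention and an assumption here:

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP from detections pre-matched at a chosen IoU threshold.

    scores: (N,) confidences; is_tp: (N,) bool array, True if the
    detection matched a ground-truth box; num_gt: total ground truths.
    """
    order = np.argsort(-scores)
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / max(num_gt, 1)
    precision = tp / np.maximum(tp + fp, 1)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):       # 101-point interpolation
        mask = recall >= r
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / 101.0
```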
arXiv Detail & Related papers (2020-04-11T05:29:32Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LiDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
Our method improves the state of the art by 5% on object detection in ScanNet scenes, and achieves top results, by a 3.4% margin, on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
- RODNet: Radar Object Detection Using Cross-Modal Supervision [34.33920572597379]
Radar is usually more robust than the camera in severe driving scenarios.
Unlike RGB images captured by a camera, semantic information from the radar signals is noticeably difficult to extract.
We propose a deep radar object detection network (RODNet) to effectively detect objects purely from the radar frequency data.
arXiv Detail & Related papers (2020-03-03T22:33:16Z)
- Deep Learning on Radar Centric 3D Object Detection [4.822598110892847]
We introduce a deep learning approach to 3D object detection with radar only.
To overcome the lack of radar labeled data, we propose a novel way of making use of abundant LiDAR data.
arXiv Detail & Related papers (2020-02-27T10:16:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.