Radar+RGB Attentive Fusion for Robust Object Detection in Autonomous
Vehicles
- URL: http://arxiv.org/abs/2008.13642v1
- Date: Mon, 31 Aug 2020 14:27:02 GMT
- Title: Radar+RGB Attentive Fusion for Robust Object Detection in Autonomous
Vehicles
- Authors: Ritu Yadav, Axel Vierling, Karsten Berns
- Abstract summary: The proposed architecture aims to use radar signal data along with RGB camera images to form a robust detection network.
BIRANet yields 72.3/75.3% average AP/AR on the nuScenes dataset.
RANet gives 69.6/71.9% average AP/AR on the same dataset, which is reasonably good performance.
- Score: 0.5801044612920815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents two variations of architecture referred to as RANet and
BIRANet. The proposed architecture aims to use radar signal data along with RGB
camera images to form a robust detection network that works efficiently, even
in variable lighting and weather conditions such as rain, dust, fog, and
others. First, radar information is fused in the feature extractor network.
Second, radar points are used to generate guided anchors. Third, a method is
proposed to improve region proposal network targets. BIRANet yields 72.3/75.3%
average AP/AR on the nuScenes dataset, which is better than the performance of
our base network, Faster R-CNN with Feature Pyramid Network (FFPN). RANet gives
69.6/71.9% average AP/AR on the same dataset, which is reasonably good
performance. Both BIRANet and RANet are also shown to be robust to noise.
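As a rough illustration of the first two mechanisms, here is a minimal PyTorch sketch, assuming hypothetical tensor shapes and module names (the paper's actual BIRANet/RANet layers are not reproduced): a rasterized radar map is fused into an RGB feature map inside the extractor, and projected radar points select anchor centers on the feature grid.
```python
# Minimal sketch, not the authors' implementation: shapes, channel
# counts, and the fusion form are chosen only for illustration.
import torch
import torch.nn as nn

class RadarRGBFusionBlock(nn.Module):
    """Fuse a rasterized radar map into an RGB feature map (idea 1)."""
    def __init__(self, rgb_ch=256, radar_ch=2):
        super().__init__()
        # project sparse radar channels (e.g. depth, RCS) to the RGB width
        self.radar_proj = nn.Conv2d(radar_ch, rgb_ch, kernel_size=1)

    def forward(self, rgb_feat, radar_feat):
        # simple additive fusion; an attention weighting would slot in here
        return rgb_feat + self.radar_proj(radar_feat)

def radar_guided_anchor_centers(radar_points_px, stride=16):
    """Turn radar points projected to image pixels (N, 2) into unique
    anchor-center cells on a feature map of the given stride (idea 2)."""
    return torch.unique((radar_points_px / stride).long(), dim=0)

fusion = RadarRGBFusionBlock()
rgb = torch.randn(1, 256, 50, 80)      # one FPN level at stride 16
radar = torch.zeros(1, 2, 50, 80)      # rasterized radar returns
pts = torch.tensor([[120.0, 300.0], [130.0, 310.0]])  # projected points
print(fusion(rgb, radar).shape, radar_guided_anchor_centers(pts).tolist())
```
Generating anchors from radar returns concentrates proposals where the sensor actually reports targets, which is the intuition behind guided anchors as opposed to tiling them densely over the image.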
Related papers
- CaFNet: A Confidence-Driven Framework for Radar Camera Depth Estimation [6.9404362058736995]
This paper introduces a two-stage, end-to-end trainable Confidence-aware Fusion Net (CaFNet) for dense depth estimation.
The first stage addresses radar-specific challenges, such as ambiguous elevation and noisy measurements.
For the final depth estimation, we innovate a confidence-aware gated fusion mechanism to integrate radar and image features effectively.
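The confidence-driven gating can be pictured with a short sketch; the gate below is a generic assumed design for illustration, not CaFNet's published layer.
```python
# Generic confidence-gated fusion sketch (assumed design, not CaFNet's
# exact layer): a predicted per-pixel confidence decides how much of the
# noisy radar feature passes through, falling back to image features.
import torch
import torch.nn as nn

class ConfidenceGatedFusion(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * ch, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # confidence c in [0, 1]
        )

    def forward(self, img_feat, radar_feat):
        c = self.gate(torch.cat([img_feat, radar_feat], dim=1))
        # trust radar where confident, image features elsewhere
        return c * radar_feat + (1.0 - c) * img_feat

f = ConfidenceGatedFusion()
print(f(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)).shape)
```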
arXiv Detail & Related papers (2024-06-30T13:39:29Z) - Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
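At toy scale, the general recipe looks roughly like the sketch below; everything here (the field size, and a 1/r^2 falloff standing in for the physics-informed sensor model) is an assumption for illustration, not the actual Radar Fields model.
```python
# Toy sketch of "implicit neural field + explicit sensor model":
# an MLP maps 3D points to reflectance, and a simple range integral
# with 1/r^2 falloff synthesizes a return per range bin (assumed form).
import torch
import torch.nn as nn

field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                      nn.Linear(64, 1), nn.Softplus())  # reflectance >= 0

def render_range_profile(origin, direction, n_bins=64, max_range=50.0):
    ts = torch.linspace(0.5, max_range, n_bins).unsqueeze(1)  # range bins
    pts = origin + ts * direction          # sample points along the ray
    sigma = field(pts).squeeze(1)          # queried reflectance
    return sigma / ts.squeeze(1) ** 2      # assumed 1/r^2 power falloff

profile = render_range_profile(torch.zeros(3), torch.tensor([1.0, 0.0, 0.0]))
print(profile.shape)  # (64,) synthetic return power per range bin
```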
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - StreakNet-Arch: An Anti-scattering Network-based Architecture for Underwater Carrier LiDAR-Radar Imaging [48.30281861646519]
We introduce StreakNet-Arch, a novel signal processing architecture designed for Underwater Carrier LiDAR-Radar (UCLR) imaging systems.
StreakNet-Arch formulates the signal processing as a real-time, end-to-end binary classification task.
We present a method for embedding streak-tube camera images into attention networks, effectively acting as a learned bandpass filter.
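One generic way to set up that formulation is sketched below; the token layout and sizes are assumptions, not StreakNet-Arch's published configuration.
```python
# Assumed setup: streak-tube image columns become tokens, self-attention
# mixes them (the "learned bandpass" role), and a binary head classifies
# echo-present vs. background per column.
import torch
import torch.nn as nn

embed = nn.Linear(512, 128)    # one 512-sample time column -> one token
encoder = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
head = nn.Linear(128, 1)       # binary logit: signal vs. background

streak = torch.randn(1, 2048, 512)                 # (batch, columns, time)
logits = head(encoder(embed(streak))).squeeze(-1)  # (1, 2048) per column
print(logits.shape)
```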
arXiv Detail & Related papers (2024-04-14T06:19:46Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
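Query-based fusion of this flavor can be sketched in a few lines; the layer sizes and the way spectrum features are sampled are assumptions, not EchoFusion's exact design.
```python
# Sketch of BEV-query cross-attention over radar spectrum features
# (sizes and sampling scheme are assumptions, not EchoFusion's design).
import torch
import torch.nn as nn

bev_h, bev_w, dim = 32, 32, 128
queries = nn.Parameter(torch.randn(bev_h * bev_w, 1, dim))  # (L, N, E)
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4)

# radar spectrum (e.g. range-azimuth) flattened to a token sequence
spectrum = torch.randn(64 * 64, 1, dim)
bev_feat, _ = attn(query=queries, key=spectrum, value=spectrum)
print(bev_feat.shape)  # (1024, 1, 128): one fused feature per BEV cell
```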
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z) - Semantic Segmentation of Radar Detections using Convolutions on Point
Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions to radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
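Distance-dependent clustering can be sketched as below; the radius schedule is an assumption for illustration, chosen because radar detections thin out with range.
```python
# Sketch of distance-dependent clustering (assumed radius schedule):
# points farther from the sensor merge under a larger neighborhood
# radius, compensating for sparser returns at range.
import numpy as np

def cluster_radar_points(xy, base_radius=1.0, growth=0.05):
    """Greedy single-linkage clustering with a range-dependent merge
    radius. xy: (N, 2) detections in meters; returns cluster ids."""
    n = len(xy)
    labels = -np.ones(n, dtype=int)
    radius = base_radius + growth * np.linalg.norm(xy, axis=1)
    cid = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        labels[i] = cid
        stack = [i]
        while stack:                      # flood-fill the neighborhood
            j = stack.pop()
            d = np.linalg.norm(xy - xy[j], axis=1)
            near = (d <= np.maximum(radius, radius[j])) & (labels < 0)
            labels[near] = cid
            stack.extend(np.flatnonzero(near))
        cid += 1
    return labels

pts = np.array([[1.0, 0.0], [1.8, 0.0], [40.0, 0.0], [42.5, 0.0]])
print(cluster_radar_points(pts))  # [0 0 1 1]: the distant pair still merges
```
With a fixed radius of about one meter the distant pair would split apart; growing the radius with range is what keeps far-away objects intact.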
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - Automotive RADAR sub-sampling via object detection networks: Leveraging
prior signal information [18.462990836437626]
Automotive radar has increasingly attracted attention due to growing interest in autonomous driving technologies.
We present a novel adaptive radar sub-sampling algorithm designed to identify regions that require more detailed/accurate reconstruction based on prior knowledge of environmental conditions.
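A toy version of detection-guided sub-sampling is sketched below; the allocation rule and the numbers are assumptions for illustration, not the paper's algorithm.
```python
# Assumed allocation rule: spend the measurement budget preferentially
# on grid regions the previous frame's detector flagged as objects.
import numpy as np

def subsample_mask(shape, boxes, budget, seed=0):
    """shape: (rows, cols) of the radar data grid; boxes: list of
    (r0, r1, c0, c1) detector regions; budget: fraction of samples kept."""
    rng = np.random.default_rng(seed)
    prob = np.full(shape, 0.2)                 # sparse coverage everywhere
    for r0, r1, c0, c1 in boxes:
        prob[r0:r1, c0:c1] = 1.0               # dense coverage on objects
    prob *= budget * prob.size / prob.sum()    # rescale to the total budget
    return rng.random(shape) < np.clip(prob, 0.0, 1.0)

mask = subsample_mask((128, 128), [(30, 60, 40, 90)], budget=0.3)
print(mask.mean())  # close to 0.3, concentrated inside the detection box
```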
arXiv Detail & Related papers (2023-02-21T05:32:28Z) - RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by
Camera-Radar Fused Object 3D Localization [30.42848269877982]
We propose a deep radar object detection network, named RODNet, which is cross-supervised by a camera-radar fused algorithm.
Our proposed RODNet takes a sequence of RF images as input to predict the likelihood of objects in the radar field of view (FoV).
In extensive experiments, our proposed cross-supervised RODNet achieves 86% average precision and 88% average recall for object detection.
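The input/output contract can be pictured with a tiny sketch; the kernel sizes and channel counts are assumptions, not RODNet's published configuration.
```python
# Assumed shapes: 3D convolutions over a (time, range, azimuth) stack of
# RF frames yield a per-cell object likelihood map over the radar FoV.
import torch
import torch.nn as nn

rf_head = nn.Sequential(
    nn.Conv3d(2, 16, kernel_size=3, padding=1),  # 2 ch: real/imag RF data
    nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
    nn.Sigmoid(),                                 # likelihood per cell
)
clip = torch.randn(1, 2, 8, 128, 128)  # (N, C, T=8 frames, range, azimuth)
print(rf_head(clip).shape)             # (1, 1, 8, 128, 128)
```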
arXiv Detail & Related papers (2021-02-09T22:01:55Z) - Radar-Camera Sensor Fusion for Joint Object Detection and Distance
Estimation in Autonomous Vehicles [8.797434238081372]
We present a novel radar-camera sensor fusion framework for accurate object detection and distance estimation in autonomous driving scenarios.
The proposed architecture uses a middle-fusion approach to fuse the radar point clouds and RGB images.
Experiments on the challenging nuScenes dataset show our method outperforms other existing radar-camera fusion methods in the 2D object detection task.
arXiv Detail & Related papers (2020-09-17T17:23:40Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
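The two stages named above can be combined in a minimal sketch; the channel counts and the attention form are assumptions, not RadarNet's exact design.
```python
# Assumed design: early fusion stacks LiDAR and radar BEV voxel features
# before the backbone; late fusion re-weights per-object radar evidence
# with a learned scalar attention.
import torch
import torch.nn as nn

class EarlyLateFusion(nn.Module):
    def __init__(self, lidar_ch=32, radar_ch=4, feat_ch=64):
        super().__init__()
        # early fusion: concatenate voxelized BEV features channel-wise
        self.backbone = nn.Conv2d(lidar_ch + radar_ch, feat_ch, 3, padding=1)
        # late fusion: scalar attention over per-object radar features
        self.score = nn.Linear(2 * feat_ch, 1)

    def forward(self, lidar_bev, radar_bev, obj_feat, radar_obj_feat):
        fused_bev = self.backbone(torch.cat([lidar_bev, radar_bev], dim=1))
        w = torch.sigmoid(self.score(torch.cat([obj_feat, radar_obj_feat], -1)))
        fused_obj = obj_feat + w * radar_obj_feat  # attention-weighted add
        return fused_bev, fused_obj

m = EarlyLateFusion()
bev, obj = m(torch.randn(1, 32, 64, 64), torch.randn(1, 4, 64, 64),
             torch.randn(5, 64), torch.randn(5, 64))
print(bev.shape, obj.shape)  # (1, 64, 64, 64) (5, 64)
```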
arXiv Detail & Related papers (2020-07-28T17:15:02Z) - RODNet: Radar Object Detection Using Cross-Modal Supervision [34.33920572597379]
Radar is usually more robust than the camera in severe driving scenarios.
Unlike with RGB images captured by a camera, semantic information is notably difficult to extract from radar signals.
We propose a deep radar object detection network (RODNet) to effectively detect objects purely from the radar frequency data.
arXiv Detail & Related papers (2020-03-03T22:33:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.