RadSegNet: A Reliable Approach to Radar Camera Fusion
- URL: http://arxiv.org/abs/2208.03849v1
- Date: Mon, 8 Aug 2022 00:09:16 GMT
- Title: RadSegNet: A Reliable Approach to Radar Camera Fusion
- Authors: Kshitiz Bansal, Keshav Rungta and Dinesh Bharadia
- Abstract summary: Camera-radar fusion systems provide a unique opportunity for all-weather, reliable, high-quality perception.
We propose a new method, RadSegNet, that uses a new design philosophy of independent information extraction.
When compared to state-of-the-art methods, RadSegNet achieves a 27% improvement on Astyx and 41.46% increase on RADIATE.
- Score: 7.407841890626661
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Perception systems for autonomous driving have seen significant advancements
in their performance over the last few years. However, these systems struggle to
show robustness in extreme weather conditions because sensors like lidars and
cameras, which are the primary sensors in a sensor suite, see a decline in
performance under these conditions. In order to solve this problem,
camera-radar fusion systems provide a unique opportunity for all-weather,
reliable, high-quality perception. Cameras provide rich semantic information,
while radars can work through occlusions and in all weather conditions. In this
work, we show that the state-of-the-art fusion methods perform poorly when
camera input is degraded, which essentially results in losing the all-weather
reliability they set out to achieve. Contrary to these approaches, we propose a
new method, RadSegNet, that uses a new design philosophy of independent
information extraction and truly achieves reliability in all conditions,
including occlusions and adverse weather. We develop and validate our proposed
system on the benchmark Astyx dataset and further verify these results on the
RADIATE dataset. Compared to state-of-the-art methods, RadSegNet achieves
a 27% improvement on Astyx and a 41.46% improvement on RADIATE in average
precision score, and maintains significantly better performance in adverse
weather conditions.
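The design philosophy of independent information extraction, as opposed to fusion schemes that tie radar processing to camera features, can be sketched in a few lines. Below is a hedged toy illustration, assuming camera semantics are projected into the radar's bird's-eye-view (BEV) grid as extra channels; class and channel counts are invented for the example and this is not the authors' code:

```python
import torch
import torch.nn as nn

class IndependentFusionBEV(nn.Module):
    """Toy sketch of independent information extraction: camera semantics
    are encoded as separate BEV channels next to a radar occupancy
    channel, so a degraded or missing camera only blanks its own channels
    instead of corrupting a jointly learned representation."""

    def __init__(self, sem_channels: int = 8, num_classes: int = 10):
        super().__init__()
        self.sem_channels = sem_channels
        self.backbone = nn.Sequential(
            nn.Conv2d(1 + sem_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, radar_bev, camera_sem_bev=None):
        # radar_bev:      (B, 1, H, W) occupancy rasterized from radar points
        # camera_sem_bev: (B, C, H, W) camera semantics projected into BEV,
        #                 or None when the camera stream is unavailable
        if camera_sem_bev is None:
            b, _, h, w = radar_bev.shape
            camera_sem_bev = radar_bev.new_zeros(b, self.sem_channels, h, w)
        return self.backbone(torch.cat([radar_bev, camera_sem_bev], dim=1))
```

Because the radar channel is never conditioned on camera features, a failed camera degrades the semantic channels only, which is the reliability property the abstract argues for.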
Related papers
- ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions [1.7537812081430004]
We propose ContextualFusion, a technique that incorporates domain knowledge about how cameras and lidars behave differently across lighting and weather variations into 3D object detection models.
Our approach yields an mAP improvement of 6.2% over state-of-the-art methods on our context-balanced synthetic dataset.
Our method enhances state-of-the-art 3D object detection performance at night on the real-world NuScenes dataset with a significant mAP improvement of 11.7%.
arXiv Detail & Related papers (2024-04-23T06:37:54Z)
- DPFT: Dual Perspective Fusion Transformer for Camera-Radar-based Object Detection [0.7919810878571297]
We propose a novel camera-radar fusion approach called the Dual Perspective Fusion Transformer (DPFT).
Our method leverages lower-level radar data (the radar cube) instead of the processed point clouds to preserve as much information as possible.
DPFT has demonstrated state-of-the-art performance on the K-Radar dataset while showing remarkable robustness against adverse weather conditions.
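The distinction DPFT draws between the raw radar cube and a processed point cloud can be made concrete: a conventional pipeline thresholds the cube and keeps only strong peaks, discarding weak returns that a network fed the dense cube could still exploit. A minimal NumPy sketch of that information loss, with a crude global threshold standing in for CFAR detection (all values illustrative):

```python
import numpy as np

# A radar cube: received power over (range, azimuth, Doppler) bins.
rng = np.random.default_rng(0)
cube = rng.exponential(scale=1.0, size=(256, 64, 32))  # noise floor
cube[100, 30, 5] += 50.0   # a strong target
cube[180, 10, 20] += 3.0   # a weak target near the noise floor

# Conventional pipeline: threshold the cube (a crude stand-in for CFAR
# detection) into a sparse point cloud, discarding weak returns.
threshold = 10.0 * cube.mean()
points = np.argwhere(cube > threshold)   # (N, 3) detected bin indices
print(f"kept {len(points)} of {cube.size} bins")  # the weak target is lost

# Cube-level alternative: keep the dense measurement (or a projection of
# it) as network input, so sub-threshold returns remain available.
range_azimuth = cube.sum(axis=2)         # (256, 64) dense range-azimuth map
```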
arXiv Detail & Related papers (2024-04-03T18:54:27Z)
- ThermRad: A Multi-modal Dataset for Robust 3D Object Detection under Challenging Conditions [15.925365473140479]
We present a new multi-modal dataset called ThermRad, which includes a 3D LiDAR, a 4D radar, an RGB camera and a thermal camera.
We propose a new multi-modal fusion method called RTDF-RCNN, which leverages the complementary strengths of 4D radars and thermal cameras to boost object detection performance.
Our method improves detection of cars, pedestrians, and cyclists by over 7.98%, 24.27%, and 27.15%, respectively.
arXiv Detail & Related papers (2023-08-20T04:34:30Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate Bird's Eye View (BEV) queries and then take the corresponding spectrum features from the radar to fuse with other sensors.
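The BEV-query mechanism described above is, in generic form, a set of learnable queries cross-attending to flattened spectrum features. A minimal sketch of that pattern only, with invented shapes and sizes; this is not the EchoFusion architecture:

```python
import torch
import torch.nn as nn

B, H, W, D = 2, 32, 32, 64              # batch, BEV grid size, feature dim
bev_queries = nn.Parameter(torch.randn(H * W, D))   # one query per BEV cell

# Radar spectrum features, e.g. from a range-azimuth encoder, flattened
# into a token sequence; 256 bins is an arbitrary example size.
radar_tokens = torch.randn(B, 256, D)

attn = nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)
q = bev_queries.unsqueeze(0).expand(B, -1, -1)       # (B, H*W, D)
bev_feats, _ = attn(query=q, key=radar_tokens, value=radar_tokens)
bev_feats = bev_feats.transpose(1, 2).reshape(B, D, H, W)
# bev_feats can now be fused with camera features and fed to a detector head.
```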
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Radar Enlighten the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion [8.946655323517094]
We propose a novel transformer-based 3D object detection model, "REDFormer", to tackle low-visibility conditions.
Our model outperforms state-of-the-art (SOTA) models in classification and detection accuracy.
arXiv Detail & Related papers (2023-05-27T00:47:39Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the robustness of state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets (generated with CycleGAN) to improve the performance of four out of five state-of-the-art detectors in autonomous racing.
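The augmentation recipe itself is simple to outline: a CycleGAN generator trained for clear-to-adverse translation restyles the training images offline, and detectors are then trained on the mixed set. A hedged sketch, assuming a pretrained generator is available (training one is out of scope here):

```python
import torch

@torch.no_grad()
def synthesize_adverse(images: torch.Tensor, generator: torch.nn.Module):
    """Translate clear-weather training images into an adverse domain.

    `generator` is assumed to be a pretrained CycleGAN generator
    (clear -> rain/fog). Detection labels are reused unchanged, relying
    on CycleGAN roughly preserving scene geometry while restyling
    appearance.
    """
    generator.eval()
    return generator(images).clamp(-1.0, 1.0)

# The detector is then trained on the union of real clear-weather data
# and the synthesized adverse-weather copies, e.g. via
# torch.utils.data.ConcatDataset([clear_set, synth_adverse_set]).
```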
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- TransWeather: Transformer-based Restoration of Images Degraded by Adverse Weather Conditions [77.20136060506906]
We propose TransWeather, a transformer-based end-to-end model with just a single encoder and a decoder.
TransWeather achieves significant improvements across multiple test datasets over the All-in-One network.
It is validated on real-world test images and found to be more effective than previous methods.
arXiv Detail & Related papers (2021-11-29T18:57:09Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
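The SNR argument maps onto a simple physical model of the kind LISA simulates: received lidar power decays roughly as exp(-2*alpha*R)/R^2 for an extinction coefficient alpha set by the weather, and returns below the detector's noise floor are lost. A hedged sketch with illustrative, uncalibrated constants:

```python
import numpy as np

def attenuate_lidar(points_xyz, intensity, alpha=0.05, noise_floor=1e-4):
    """Crude weather augmentation for a lidar point cloud.

    alpha is a one-way extinction coefficient in 1/m (fog/rain dependent);
    returns whose attenuated power falls below the detector noise floor
    are deleted, mimicking missed detections. Constants are illustrative,
    not calibrated to any sensor.
    """
    r = np.linalg.norm(points_xyz, axis=1)           # range of each point
    # Two-way atmospheric attenuation on top of 1/r^2 beam spreading.
    received = intensity * np.exp(-2.0 * alpha * r) / np.maximum(r, 1.0) ** 2
    keep = received > noise_floor
    return points_xyz[keep], received[keep]

pts = np.random.uniform(-50.0, 50.0, size=(1000, 3))
kept, power = attenuate_lidar(pts, intensity=np.ones(1000))
print(f"{len(kept)} of 1000 points survive the simulated fog")
```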
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
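Complex-valued convolution is commonly realized with two real-valued kernels: for input x = x_r + i*x_i and kernel w = w_r + i*w_i, the output is (w_r*x_r - w_i*x_i) + i*(w_r*x_i + w_i*x_r), which is what keeps phase intact. A sketch of one such layer, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex convolution via two real convolutions:
    (wr + i*wi) * (xr + i*xi) = (wr*xr - wi*xi) + i*(wr*xi + wi*xr).
    Keeping the real/imaginary parts coupled this way is what lets a
    network preserve phase information through interference removal."""

    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)

    def forward(self, x_r, x_i):
        out_r = self.conv_r(x_r) - self.conv_i(x_i)
        out_i = self.conv_r(x_i) + self.conv_i(x_r)
        return out_r, out_i

# Example: a complex radar baseband signal, batch of 2, 1 channel.
layer = ComplexConv1d(1, 8, kernel_size=5, padding=2)
sig = torch.randn(2, 1, 128, dtype=torch.cfloat)
y_r, y_i = layer(sig.real, sig.imag)
```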
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- All-Weather Object Recognition Using Radar and Infrared Sensing [1.7513645771137178]
This thesis explores new sensing developments based on long wave polarised infrared (IR) imagery and imaging radar to recognise objects.
First, we developed a methodology based on Stokes parameters computed from polarised infrared data to recognise vehicles with deep neural networks (see the sketch after this list).
Second, we explored the potential of using only the power spectrum captured by low-THz radar sensors to perform object recognition in a controlled scenario.
Last, we created a new large-scale dataset in the "wild" covering many different weather scenarios, demonstrating radar's robustness for detecting vehicles in adverse weather.
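The Stokes parameters from the first contribution reduce to simple arithmetic over intensity images captured through polarizers at four angles; a sketch of the standard formulas (the thesis's exact preprocessing may differ):

```python
import numpy as np

def stokes_from_polarized(i0, i45, i90, i135):
    """Standard linear Stokes parameters from four polarizer angles.

    i0..i135: intensity images captured through polarizers at
    0, 45, 90 and 135 degrees (same scene, same exposure).
    """
    s0 = i0 + i90                  # total intensity
    s1 = i0 - i90                  # horizontal vs. vertical preference
    s2 = i45 - i135                # +45 vs. -45 degree preference
    # Degree of linear polarization, a common derived input feature.
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)
    return s0, s1, s2, dolp
```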
arXiv Detail & Related papers (2020-10-30T14:16:39Z)