Radar Enlighten the Dark: Enhancing Low-Visibility Perception for
Automated Vehicles with Camera-Radar Fusion
- URL: http://arxiv.org/abs/2305.17318v1
- Date: Sat, 27 May 2023 00:47:39 GMT
- Title: Radar Enlighten the Dark: Enhancing Low-Visibility Perception for
Automated Vehicles with Camera-Radar Fusion
- Authors: Can Cui, Yunsheng Ma, Juanwu Lu and Ziran Wang
- Abstract summary: We propose a novel transformer-based 3D object detection model "REDFormer" to tackle low visibility conditions.
Our model outperforms state-of-the-art (SOTA) models on classification and detection accuracy.
- Score: 8.946655323517094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensor fusion is a crucial augmentation technique for improving the accuracy
and reliability of perception systems for automated vehicles under diverse
driving conditions. However, adverse weather and low-light conditions remain
challenging, where sensor performance degrades significantly, exposing vehicle
safety to potential risks. Advanced sensors such as LiDARs can help mitigate
the issue but with extremely high marginal costs. In this paper, we propose a
novel transformer-based 3D object detection model, "REDFormer", to tackle
low-visibility conditions with a more practical and cost-effective solution
based on bird's-eye-view camera-radar fusion.
Using the nuScenes dataset with multi-radar point clouds, weather information,
and time-of-day data, our model outperforms state-of-the-art (SOTA) models on
classification and detection accuracy. Finally, we provide extensive ablation
studies quantifying the contribution of each model component to addressing the
above-mentioned challenges. In particular, the experiments show that our model
achieves a significant performance improvement over the baseline model in
low-visibility scenarios, with a 31.31% increase in rainy scenes and a 46.99%
improvement in nighttime scenes. The source code of this study is publicly
available.
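Neither the abstract nor this page includes code, but the core idea of bird's-eye-view camera-radar fusion can be illustrated with a minimal sketch. The snippet below is not the authors' REDFormer implementation; it simply rasterizes radar points (position, radar cross-section, radial velocity) into a BEV grid and concatenates that grid with camera-derived BEV features, which a detection head would then consume. All function names, grid extents, and channel counts are illustrative assumptions.

```python
import numpy as np

# Illustrative only: a simplified BEV camera-radar fusion step, not the
# authors' REDFormer code. Grid extents, channel counts, and function
# names are assumptions made for this sketch.

def rasterize_radar_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0),
                        cell=0.5):
    """Rasterize radar points (x, y, rcs, v_r) into a BEV grid.

    Channels: occupancy, mean radar cross-section, mean radial velocity.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((3, ny, nx), dtype=np.float32)
    counts = np.zeros((ny, nx), dtype=np.float32)

    for x, y, rcs, v_r in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue
        i = int((y - y_range[0]) / cell)   # row index in the BEV grid
        j = int((x - x_range[0]) / cell)   # column index in the BEV grid
        counts[i, j] += 1.0
        bev[0, i, j] = 1.0                 # occupancy flag
        bev[1, i, j] += rcs                # accumulate radar cross-section
        bev[2, i, j] += v_r                # accumulate radial velocity

    nonzero = counts > 0
    bev[1][nonzero] /= counts[nonzero]     # mean RCS per cell
    bev[2][nonzero] /= counts[nonzero]     # mean radial velocity per cell
    return bev


def fuse_bev(camera_bev, radar_bev):
    """Concatenate camera and radar BEV features along the channel axis.

    A downstream detection head would consume the fused tensor; here we
    simply return it.
    """
    assert camera_bev.shape[1:] == radar_bev.shape[1:], "BEV grids must align"
    return np.concatenate([camera_bev, radar_bev], axis=0)


if __name__ == "__main__":
    radar_points = [(12.3, -4.1, 8.5, -1.2), (30.0, 15.5, 3.2, 0.4)]
    radar_bev = rasterize_radar_bev(radar_points)
    camera_bev = np.random.rand(64, *radar_bev.shape[1:]).astype(np.float32)
    fused = fuse_bev(camera_bev, radar_bev)
    print(fused.shape)  # (67, 200, 200)
```

In a transformer-based detector such as the one described here, the modalities would typically interact through attention over BEV queries rather than plain concatenation; the concatenation above is only the simplest stand-in for that fusion step.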
Related papers
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance.
Radar suffers from noise and positional ambiguity.
We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- Enhancing autonomous vehicle safety in rain: a data-centric approach for clear vision [0.0]
We developed a vision model that processes live vehicle camera feeds to eliminate rain-induced visual hindrances.
We employed a classic encoder-decoder architecture with skip connections and concatenation operations.
The results demonstrated notable improvements in steering accuracy, underscoring the model's potential to enhance navigation safety and reliability in rainy weather conditions.
arXiv Detail & Related papers (2024-12-29T20:27:12Z)
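The de-raining entry above mentions a classic encoder-decoder with skip connections and concatenation. As a point of reference only, a minimal PyTorch-style sketch of that generic pattern (not the authors' network; all channel counts and layer sizes are arbitrary assumptions) could look like this:

```python
import torch
import torch.nn as nn

# Generic encoder-decoder with skip connections (U-Net-style), shown only to
# illustrate the pattern named in the entry above. It is not the authors'
# de-raining network, and every channel count here is an arbitrary assumption.

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(16, 32, 3, stride=2, padding=1)   # downsample
        self.enc2 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)        # upsample
        # Decoder sees upsampled features concatenated with the skip (16 + 16).
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, out_ch, 3, padding=1))

    def forward(self, x):
        s1 = self.enc1(x)                            # skip-connection source
        e2 = self.enc2(self.down(s1))
        u = self.up(e2)
        return self.dec(torch.cat([u, s1], dim=1))   # concatenation fusion


if __name__ == "__main__":
    rainy = torch.rand(1, 3, 64, 64)
    print(TinyEncoderDecoder()(rainy).shape)  # torch.Size([1, 3, 64, 64])
```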
- Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
Visual imaging quality inevitably suffers from several kinds of degradation under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z)
- ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions [1.7537812081430004]
We propose a technique called ContextualFusion that incorporates domain knowledge about how cameras and lidars behave differently under varying lighting and weather conditions into 3D object detection models.
Our approach yields an mAP improvement of 6.2% over state-of-the-art methods on our context-balanced synthetic dataset.
Our method enhances state-of-the-art 3D object detection performance at night on the real-world NuScenes dataset with a significant mAP improvement of 11.7%.
arXiv Detail & Related papers (2024-04-23T06:37:54Z)
- Better Monocular 3D Detectors with LiDAR from the Past [64.6759926054061]
Camera-based 3D detectors often suffer inferior performance compared to LiDAR-based counterparts due to inherent depth ambiguities in images.
In this work, we seek to improve monocular 3D detectors by leveraging unlabeled historical LiDAR data.
We show consistent and significant performance gains across multiple state-of-the-art models and datasets, with negligible additional latency of 9.66 ms and a small storage cost.
arXiv Detail & Related papers (2024-04-08T01:38:43Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Multi-Attention Fusion Drowsy Driving Detection Model [1.2043574473965317]
We introduce a novel approach called the Multi-Attention Fusion Drowsy Driving Detection Model (MAF).
Our proposed model achieves an impressive driver drowsiness detection accuracy of 96.8%.
arXiv Detail & Related papers (2023-12-28T14:53:32Z)
- RadSegNet: A Reliable Approach to Radar Camera Fusion [7.407841890626661]
Camera-radar fusion systems provide a unique opportunity for reliable, high-quality perception in all weather conditions.
We propose a new method, RadSegNet, that uses a new design philosophy of independent information extraction.
When compared to state-of-the-art methods, RadSegNet achieves a 27% improvement on Astyx and 41.46% increase on RADIATE.
arXiv Detail & Related papers (2022-08-08T00:09:16Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
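The LISA entry above describes physics-based simulation of adverse weather for lidar point clouds. The toy sketch below is not the LISA model; it only illustrates, under made-up parameters, the general idea of degrading a point cloud the way rain reduces signal-to-noise ratio: attenuating return intensity with range, dropping weak returns, and adding positional jitter.

```python
import numpy as np

# Illustrative only: a toy rain augmentation for lidar point clouds, loosely
# inspired by the idea of reduced SNR in adverse weather. This is NOT the
# LISA physics model; rain_rate, the attenuation coefficient, and the noise
# floor below are made-up parameters for this sketch.

def augment_rain(points, intensity, rain_rate=10.0, rng=None):
    """points: (N, 3) xyz array; intensity: (N,) reflectance values."""
    rng = np.random.default_rng() if rng is None else rng
    ranges = np.linalg.norm(points, axis=1)

    # Attenuate returned intensity exponentially with range; heavier rain
    # attenuates more (alpha chosen arbitrarily for illustration).
    alpha = 0.004 * rain_rate
    intensity = intensity * np.exp(-alpha * ranges)

    # Randomly drop points whose attenuated intensity falls below a noise
    # floor, mimicking lost returns at low SNR.
    keep = intensity > rng.uniform(0.0, 0.05, size=intensity.shape)

    # Add small positional jitter to surviving points (backscatter noise).
    noise = rng.normal(scale=0.02, size=points.shape)
    return points[keep] + noise[keep], intensity[keep]


if __name__ == "__main__":
    pts = np.random.uniform(-60, 60, size=(1000, 3)).astype(np.float32)
    inten = np.random.uniform(0.05, 1.0, size=1000).astype(np.float32)
    wet_pts, wet_inten = augment_rain(pts, inten, rain_rate=25.0)
    print(len(pts), "->", len(wet_pts), "points after augmentation")
```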
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.