AGSENet: A Robust Road Ponding Detection Method for Proactive Traffic Safety
- URL: http://arxiv.org/abs/2410.16999v1
- Date: Tue, 22 Oct 2024 13:21:36 GMT
- Title: AGSENet: A Robust Road Ponding Detection Method for Proactive Traffic Safety
- Authors: Ronghui Zhang, Shangyu Yang, Dakang Lyu, Zihan Wang, Junzhou Chen, Yilong Ren, Bolin Gao, Zhihan Lv
- Abstract summary: Road ponding poses a serious threat to road safety by causing vehicles to lose control and leading to accidents ranging from minor fender benders to severe collisions.
Existing technologies struggle to accurately identify road ponding due to complex road textures and variable ponding coloration influenced by reflection characteristics.
We propose a novel approach called Self-Attention-based Global Saliency-Enhanced Network (AGSENet) for proactive road ponding detection and traffic safety improvement.
- Abstract: Road ponding, a prevalent traffic hazard, poses a serious threat to road safety by causing vehicles to lose control and leading to accidents ranging from minor fender benders to severe collisions. Existing technologies struggle to accurately identify road ponding due to complex road textures and variable ponding coloration influenced by reflection characteristics. To address this challenge, we propose a novel approach called Self-Attention-based Global Saliency-Enhanced Network (AGSENet) for proactive road ponding detection and traffic safety improvement. AGSENet incorporates saliency detection techniques through the Channel Saliency Information Focus (CSIF) and Spatial Saliency Information Enhancement (SSIE) modules. The CSIF module, integrated into the encoder, employs self-attention to highlight similar features by fusing spatial and channel information. The SSIE module, embedded in the decoder, refines edge features and reduces noise by leveraging correlations across different feature levels. To ensure accurate and reliable evaluation, we corrected significant mislabeling and missing annotations in the Puddle-1000 dataset. Additionally, we constructed the Foggy-Puddle and Night-Puddle datasets for road ponding detection in foggy and low-light conditions, respectively. Experimental results demonstrate that AGSENet outperforms existing methods, achieving IoU improvements of 2.03%, 0.62%, and 1.06% on the Puddle-1000, Foggy-Puddle, and Night-Puddle datasets, respectively, setting a new state-of-the-art in this field. Finally, we verified the algorithm's reliability on edge computing devices. This work provides a valuable reference for proactive warning research in road traffic safety.
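The abstract describes CSIF only at a high level; as a rough illustration, here is a minimal PyTorch sketch of a channel self-attention block that fuses spatial context into channel-wise saliency weights. The class name, shapes, and design details are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a CSIF-like channel self-attention block (assumed design).
import torch
import torch.nn as nn


class ChannelSaliencyFocus(nn.Module):
    """Hypothetical channel-attention block in the spirit of CSIF."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Flatten spatial dims so each channel becomes an (h*w)-dim descriptor.
        q = self.query(x).view(b, c, -1)
        k = self.key(x).view(b, c, -1)
        v = self.value(x).view(b, c, -1)
        # Channel-to-channel affinity: channels with similar spatial responses
        # reinforce each other, highlighting ponding-like features.
        attn = torch.softmax((q @ k.transpose(1, 2)) * self.scale, dim=-1)
        out = (attn @ v).view(b, c, h, w)
        return x + out  # residual connection preserves the original features
```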
Related papers
- An Efficient Approach to Generate Safe Drivable Space by LiDAR-Camera-HDmap Fusion [13.451123257796972]
We propose an accurate and robust perception module for drivable space extraction in Autonomous Vehicles (AVs).
Our work introduces a robust, easy-to-generalize perception module that leverages LiDAR, camera, and HD map data fusion.
Our approach is tested on a real dataset, and its reliability is verified during the daily operation (including in harsh snowy weather) of our autonomous shuttle, WATonoBus.
arXiv Detail & Related papers (2024-10-29T17:54:02Z)
- Annotation-Free Curb Detection Leveraging Altitude Difference Image [9.799565515089617]
Road curbs are essential for ensuring the safety of autonomous vehicles.
Current methods for detecting curbs rely on camera imagery or LiDAR point clouds.
This work proposes an annotation-free curb detection method leveraging the Altitude Difference Image (ADI).
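The summary does not spell out how the ADI is constructed; one plausible reading, assumed here, is that ground heights (e.g., from LiDAR) are rasterized into a grid and each cell stores the altitude range of its neighborhood, so curbs show up as step edges.

```python
# Hypothetical ADI construction: local altitude range over a height grid.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter


def altitude_difference_image(height_grid: np.ndarray, window: int = 5) -> np.ndarray:
    """Per-cell max-minus-min altitude in a square window; curbs appear as edges."""
    hi = maximum_filter(height_grid, size=window)
    lo = minimum_filter(height_grid, size=window)
    return hi - lo
```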
arXiv Detail & Related papers (2024-09-30T10:29:41Z)
- LOID: Lane Occlusion Inpainting and Detection for Enhanced Autonomous Driving Systems [0.0]
We propose two innovative approaches to enhance lane detection in challenging environments.
The first approach, aug-Segment, improves conventional lane detection models by augmenting the CULane training dataset.
The second approach, LOID (Lane Occlusion Inpainting and Detection), uses inpainting models to reconstruct the road environment in the occluded areas.
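LOID's inpainting models are learned, but the shape of the pipeline can be illustrated with OpenCV's classical inpainting; the occlusion mask is assumed to come from an upstream detector, and the function name is a placeholder.

```python
# Illustrative stand-in for learned inpainting: fill occluded road pixels
# so a downstream lane detector sees an unobstructed road surface.
import cv2
import numpy as np


def reconstruct_occluded_road(frame: np.ndarray, occlusion_mask: np.ndarray) -> np.ndarray:
    """Inpaint pixels where occlusion_mask > 0 (mask must be 8-bit, 1-channel)."""
    return cv2.inpaint(frame, occlusion_mask.astype(np.uint8),
                       inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```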
arXiv Detail & Related papers (2024-08-17T06:55:40Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods, which design a backdoor for the input/output space of diffusion models, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety demands of driving systems, no solution to adapting MOT to domain shift under test-time conditions had previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object Detection Systems [13.046347364043594]
In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
arXiv Detail & Related papers (2022-11-11T10:24:42Z)
- Perspective Aware Road Obstacle Detection [104.57322421897769]
We show that road obstacle detection techniques ignore the fact that, in practice, the apparent size of the obstacles decreases as their distance to the vehicle increases.
We leverage this by computing a scale map encoding the apparent size of a hypothetical object at every image location.
We then leverage this perspective map to generate training data by injecting onto the road synthetic objects whose size corresponds to the perspective foreshortening.
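Under a flat-ground pinhole model, the apparent size of a fixed-size object is proportional to how far its image row lies below the horizon; the sketch below builds such a scale map under that assumption, with illustrative parameters rather than the paper's.

```python
# Assumed flat-ground perspective model: apparent size ~ (row - horizon_row),
# normalized to 1.0 at a chosen reference row.
import numpy as np


def scale_map(h: int, w: int, horizon_row: int, ref_row: int) -> np.ndarray:
    """Relative apparent size of a fixed-size object at each pixel location."""
    rows = np.arange(h, dtype=np.float32)
    scale = np.clip((rows - horizon_row) / (ref_row - horizon_row), 0.0, None)
    return np.tile(scale[:, None], (1, w))  # size varies by row, not column
```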
arXiv Detail & Related papers (2022-10-04T17:48:42Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
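A generic self-training loop captures the idea of fine-tuning on pseudo-labels; the detector interface and loader names below are placeholders, not the paper's API.

```python
# Sketch of pseudo-label self-training for domain adaptation (assumed interface).
import torch


def self_train(detector, target_loader, optimizer, conf_thresh=0.7, epochs=1):
    # 1) Label the target domain with the frozen source-trained detector.
    detector.eval()
    pseudo_labels = []
    with torch.no_grad():
        for scan in target_loader:
            boxes, scores = detector(scan)           # assumed output format
            pseudo_labels.append((scan, boxes[scores > conf_thresh]))
    # 2) Fine-tune the detector on its own confident predictions.
    detector.train()
    for _ in range(epochs):
        for scan, labels in pseudo_labels:
            loss = detector.loss(scan, labels)       # assumed training interface
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```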
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Channel Boosting Feature Ensemble for Radar-based Object Detection [6.810856082577402]
Radar-based object detection is explored as a counterpart sensor modality that can be deployed and used in adverse weather conditions.
The proposed method's efficacy is extensively evaluated using the COCO evaluation metric.
arXiv Detail & Related papers (2021-01-10T12:20:58Z)