DMRVisNet: Deep Multi-head Regression Network for Pixel-wise Visibility
Estimation Under Foggy Weather
- URL: http://arxiv.org/abs/2112.04278v1
- Date: Wed, 8 Dec 2021 13:31:07 GMT
- Title: DMRVisNet: Deep Multi-head Regression Network for Pixel-wise Visibility
Estimation Under Foggy Weather
- Authors: Jing You, Shaocheng Jia, Xin Pei, and Danya Yao
- Abstract summary: Fog is a common weather condition that frequently appears in the real world, especially in mountainous areas.
Current methods rely on professional instruments installed at fixed roadside locations to measure visibility.
We propose an innovative end-to-end convolutional neural network framework to estimate visibility.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scene perception is essential for driving decision-making and traffic safety.
However, fog, a common weather condition, frequently appears in the real world,
especially in mountainous areas, making it difficult to accurately observe the
surrounding environment. Precisely estimating visibility under foggy weather can
therefore significantly benefit traffic management and safety. To address this,
most current methods use professional instruments outfitted at fixed roadside
locations to measure visibility; these methods are expensive and inflexible. In
this paper, we propose an innovative end-to-end convolutional neural network
framework that estimates visibility from image data alone by leveraging
Koschmieder's law. The proposed method integrates this physical model into the
framework rather than predicting the visibility value directly with the
convolutional neural network. Moreover, we estimate visibility as a pixel-wise
visibility map, in contrast to previous visibility measurement methods, which
predict only a single value for an entire image. The estimated result of our
method is thus more informative, particularly in uneven fog scenarios, which can
support the development of a more precise early warning system for foggy
weather, thereby better protecting intelligent transportation infrastructure
systems and promoting their development. To validate the proposed framework, a
virtual dataset, FACI, containing 3,000 foggy images at different fog
concentrations, is collected using the AirSim platform. Detailed experiments
show that the proposed method achieves performance competitive with
state-of-the-art methods.
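The physical core of the framework can be illustrated with a short sketch. Koschmieder's law models a foggy observation as I(x) = J(x) * t(x) + A * (1 - t(x)), where the transmission t(x) = exp(-beta * d(x)) depends on the extinction coefficient beta and the scene depth d(x); meteorological visibility then follows as V = -ln(epsilon) / beta for a contrast threshold epsilon. The Python snippet below converts a per-pixel extinction map into a pixel-wise visibility map in this spirit; the function name, the epsilon = 0.05 threshold, and the assumption that a network head outputs beta per pixel are illustrative choices, not the authors' implementation.

import numpy as np

def visibility_from_extinction(beta_map, contrast_threshold=0.05, eps=1e-6):
    """Turn a per-pixel extinction coefficient map into a visibility map.

    Koschmieder's law: I(x) = J(x) * t(x) + A * (1 - t(x)),
    with t(x) = exp(-beta * d(x)). Visibility is the distance at which
    contrast falls to a threshold: V = -ln(epsilon) / beta. The 0.05
    threshold follows the common CIE convention (some works use 0.02,
    giving V ~= 3.912 / beta); both are assumptions here, not taken
    from the paper.
    """
    beta_map = np.maximum(beta_map, eps)  # guard against beta -> 0 in clear air
    return -np.log(contrast_threshold) / beta_map  # metres, if beta is per metre

# Toy usage: a per-pixel beta map, e.g. predicted by a regression head.
beta = np.full((4, 4), 0.01)             # uniform fog, beta = 0.01 m^-1
print(visibility_from_extinction(beta))  # ~299.6 m at every pixel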
Related papers
- Vision-Driven 2D Supervised Fine-Tuning Framework for Bird's Eye View Perception [20.875243604623723]
We propose a fine-tuning method for BEV perception network based on visual 2D semantic perception.
Considering the maturity and development of 2D perception technologies, our method significantly reduces the dependency on high-cost LiDAR ground truths.
arXiv Detail & Related papers (2024-09-09T17:40:30Z)
- Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
Visual imaging quality inevitably suffers from several kinds of degradation under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Spatial-temporal Vehicle Re-identification [3.7748602100709534]
We propose a spatial-temporal vehicle ReID framework that estimates reliable camera network topology.
The proposed methods achieve superior performance on the public VeRi776 dataset, with 99.64% rank-1 accuracy.
arXiv Detail & Related papers (2023-09-03T13:07:38Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [57.19891435386843]
We present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view.
Our model runs at 25 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
arXiv Detail & Related papers (2022-11-15T13:52:41Z)
- Sensor Visibility Estimation: Metrics and Methods for Systematic Performance Evaluation and Improvement [0.0]
We introduce metrics and a framework to assess the performance of visibility estimators.
Our metrics are verified with labeled real-world and simulation data from infrastructure radars and cameras.
Applying our metrics, we enhance the radar and camera visibility estimators by modeling the 3D elevation of sensor and objects.
arXiv Detail & Related papers (2022-11-11T16:17:43Z)
- Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study [38.65843674620544]
We introduce a novel vision-cloud data fusion methodology, integrating camera image and Digital Twin information from the cloud to help intelligent vehicles make better decisions.
A case study on lane change prediction is conducted to show the effectiveness of the proposed data fusion methodology.
arXiv Detail & Related papers (2021-12-07T23:42:21Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- RV-FuseNet: Range View Based Fusion of Time-Series LiDAR Data for Joint 3D Object Detection and Motion Forecasting [13.544498422625448]
We present RV-FuseNet, a novel end-to-end approach for joint detection and trajectory estimation.
Instead of the widely used bird's eye view (BEV) representation, we utilize the native range view (RV) representation of LiDAR data.
We show that our approach significantly improves motion forecasting performance over the existing state-of-the-art.
arXiv Detail & Related papers (2020-05-21T19:22:27Z)
This list is automatically generated from the titles and abstracts of papers on this site.