How to deal with glare for improved perception of Autonomous Vehicles
- URL: http://arxiv.org/abs/2404.10992v1
- Date: Wed, 17 Apr 2024 02:05:05 GMT
- Title: How to deal with glare for improved perception of Autonomous Vehicles
- Authors: Muhammad Z. Alam, Zeeshan Kaleem, Sousso Kelouwani,
- Abstract summary: Vision sensors are versatile and can capture a wide range of visual cues, such as color, texture, shape, and depth.
However, vision-based environment perception systems can be easily affected by glare in the presence of a bright source of light.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision sensors are versatile and can capture a wide range of visual cues, such as color, texture, shape, and depth. This versatility, along with the relative affordability of machine vision cameras, played an important role in the adoption of vision-based environment perception systems in autonomous vehicles (AVs). However, vision-based perception systems can be easily affected by glare in the presence of a bright source of light, such as the sun, the headlights of an oncoming vehicle at night, or simply light reflecting off snow- or ice-covered surfaces; such scenarios are encountered frequently during driving. In this paper, we investigate various glare reduction techniques, including the proposed saturated pixel-aware glare reduction technique, for improved performance of the computer vision (CV) tasks employed by the perception layer of AVs. We evaluate these glare reduction methods based on various performance metrics of the CV algorithms used by the perception layer. Specifically, we consider object detection, object recognition, object tracking, depth estimation, and lane detection, which are crucial for autonomous driving. The experimental findings validate the efficacy of the proposed glare reduction approach, showcasing enhanced performance across diverse perception tasks and remarkable resilience against varying levels of glare.
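The abstract does not describe the proposed technique in detail. As a rough illustration only, the sketch below shows one plausible shape of a saturated-pixel-aware glare reduction step: mask pixels at or near sensor saturation, then fill them by iterative neighbour averaging (a crude diffusion-style inpainting). The threshold, iteration count, and fill strategy are assumptions for illustration, not the authors' method.

```python
import numpy as np

SAT_THRESH = 250  # assumed: pixels at/above this are treated as glare-saturated


def glare_mask(gray):
    """Boolean mask of saturated (glare) pixels in an 8-bit grayscale image."""
    return gray >= SAT_THRESH


def reduce_glare(gray, iters=50):
    """Fill saturated pixels by repeatedly averaging their 4-neighbours.

    A crude diffusion-based inpainting: only masked pixels are updated,
    so valid image content is left untouched. Illustrative only.
    """
    img = gray.astype(np.float64)
    mask = glare_mask(gray)
    for _ in range(iters):
        padded = np.pad(img, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[mask] = neigh[mask]  # overwrite only the saturated pixels
    return np.clip(img, 0, 255).astype(np.uint8)
```

In practice a real pipeline would use a proper inpainting routine (e.g. OpenCV's `cv2.inpaint`) rather than this diffusion loop, but the mask-then-fill structure is the same.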
Related papers
- Lane Detection System for Driver Assistance in Vehicles [36.136619420474766]
This work presents the development of a lane detection system aimed at assisting the driving of conventional and autonomous vehicles.
The system was implemented using traditional computer vision techniques, focusing on robustness and efficiency to operate in real-time.
It is concluded that, despite its limitations, the traditional computer vision approach shows significant potential for application in driver assistance systems and autonomous navigation.
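The entry above mentions traditional computer vision techniques but names none. A classic lane-detection pipeline of that kind is edges-then-line-fit; the toy sketch below (NumPy only, no OpenCV) thresholds the horizontal gradient in the lower half of the frame and fits a line through the edge pixels by least squares. The threshold and region of interest are assumptions, not taken from the paper.

```python
import numpy as np

def detect_lane_line(gray):
    """Toy lane detector: horizontal-gradient edges in the lower half of
    the frame, then a least-squares line fit x = m*y + b through them.
    Illustrative of the classic edge-detect-then-fit pipeline only.
    Returns (m, b), or None if too few edge pixels are found."""
    g = gray.astype(np.float64)
    grad = np.abs(np.diff(g, axis=1))      # horizontal intensity gradient
    edges = grad > 50.0                    # assumed fixed edge threshold
    h = gray.shape[0]
    ys, xs = np.nonzero(edges[h // 2:, :])  # keep lower half (road region)
    ys = ys + h // 2                        # restore full-image row indices
    if xs.size < 2:
        return None
    m, b = np.polyfit(ys, xs, 1)            # fit x = m*y + b
    return m, b
```

Production systems typically replace the plain line fit with a Hough transform or polynomial fit in a perspective-warped view, but the overall structure is the same.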
arXiv Detail & Related papers (2024-10-05T05:53:29Z) - Low-Light Enhancement Effect on Classification and Detection: An Empirical Study [48.6762437869172]
We evaluate the impact of Low-Light Image Enhancement (LLIE) methods on high-level vision tasks.
Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis.
This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.
arXiv Detail & Related papers (2024-09-22T14:21:31Z) - NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z) - MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various degrading factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Vision-Based Environmental Perception for Autonomous Driving [4.138893879750758]
Visual perception plays an important role in autonomous driving.
Recent developments in deep learning-based methods offer better reliability and faster processing.
Monocular camera uses image data from a single viewpoint to estimate object depth.
Simultaneous Localization and Mapping (SLAM) can establish a model of the road environment.
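The monocular depth estimation mentioned above can be illustrated with the basic pinhole-camera relation Z = f * H / h: if the real-world height of an object class is known, its depth follows from its apparent pixel height. The function and parameter names below are hypothetical, for illustration only.

```python
def depth_from_height(focal_px, real_height_m, pixel_height):
    """Pinhole-camera depth estimate Z = f * H / h.

    focal_px      -- focal length in pixels
    real_height_m -- assumed real-world object height in metres
    pixel_height  -- observed object height in image pixels
    """
    return focal_px * real_height_m / pixel_height
```

For example, a 1.5 m tall object imaged at 100 px by a camera with a 1000 px focal length is estimated to be 15 m away. Real monocular systems learn depth cues rather than rely on a single known height, but this relation is the geometric basis.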
arXiv Detail & Related papers (2022-12-22T01:59:58Z) - ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks.
We validate the use of our datasets by demonstrating that the level of color discrimination has a dominating effect on the performance of machine perception models.
Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
arXiv Detail & Related papers (2022-12-16T18:51:41Z) - Task-Driven Deep Image Enhancement Network for Autonomous Driving in Bad Weather [5.416049433853457]
In bad weather, visual perception is greatly affected by several degrading effects.
We introduce a new task-driven training strategy to guide the high-level task model suitable for both high-quality restoration of images and highly accurate perception.
Experiment results demonstrate that the proposed method improves the performance among lane and 2D object detection, and depth estimation largely under adverse weather.
arXiv Detail & Related papers (2021-10-14T08:03:33Z) - Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z) - Provident Vehicle Detection at Night for Advanced Driver Assistance Systems [3.7468898363447654]
We present a complete system capable of providently detecting oncoming vehicles at nighttime based on the light artifacts they cause.
We quantify the time benefit that the provident vehicle detection system provides compared to an in-production computer vision system.
arXiv Detail & Related papers (2021-07-23T15:27:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.