Weather-Dependent Variations in Driver Gaze Behavior: A Case Study in Rainy Conditions
- URL: http://arxiv.org/abs/2509.01013v1
- Date: Sun, 31 Aug 2025 22:33:30 GMT
- Title: Weather-Dependent Variations in Driver Gaze Behavior: A Case Study in Rainy Conditions
- Authors: Ghazal Farhani, Taufiq Rahman, Dominique Charlebois
- Abstract summary: This case study investigates the eye gaze behavior of a driver operating the same highway route under both clear and rainy conditions. Rainy conditions lead to more frequent dashboard glances, longer fixation durations, and higher gaze elevation, indicating increased cognitive focus.
- Score: 0.8602553195689513
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Rainy weather significantly increases the risk of road accidents due to reduced visibility and vehicle traction. Understanding how experienced drivers adapt their visual perception through gaze behavior under such conditions is critical for designing robust driver monitoring systems (DMS) and for informing advanced driver assistance systems (ADAS). This case study investigates the eye gaze behavior of a driver operating the same highway route under both clear and rainy conditions. To this end, gaze behavior was analyzed by a two-step clustering approach: first, clustering gaze points within 10-second intervals, and then aggregating cluster centroids into meta-clusters. This, along with Markov transition matrices and metrics such as fixation duration, gaze elevation, and azimuth distributions, reveals meaningful behavioral shifts. While the overall gaze behavior focused on the road with occasional mirror checks remains consistent, rainy conditions lead to more frequent dashboard glances, longer fixation durations, and higher gaze elevation, indicating increased cognitive focus. These findings offer valuable insight into visual attention patterns under adverse conditions and highlight the potential of leveraging gaze modeling to aid in the design of more robust ADAS and DMS.
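The two-step clustering pipeline and the Markov transition matrices described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, window length, cluster counts, and the synthetic gaze data are all assumptions made for the example.

```python
# Sketch of the two-step gaze clustering from the abstract:
#   1) cluster gaze points within fixed 10-second windows,
#   2) aggregate the window centroids into meta-clusters,
# then build a Markov transition matrix over meta-cluster labels.
# Sampling rate, cluster counts, and the random data are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fs = 60                                # assumed gaze sampling rate (Hz)
window = 10 * fs                       # 10-second windows
gaze = rng.normal(size=(fs * 60, 2))   # 1 minute of (azimuth, elevation) samples

# Step 1: per-window clustering -> one set of centroids per window
centroids = []
for start in range(0, len(gaze) - window + 1, window):
    km = KMeans(n_clusters=3, n_init=10, random_state=0)
    km.fit(gaze[start:start + window])
    centroids.append(km.cluster_centers_)
centroids = np.vstack(centroids)

# Step 2: aggregate centroids into meta-clusters
# (e.g. road, mirrors, dashboard regions)
n_meta = 4
meta = KMeans(n_clusters=n_meta, n_init=10, random_state=0).fit(centroids)

# Markov transition matrix over meta-cluster labels of consecutive samples
labels = meta.predict(gaze)
T = np.zeros((n_meta, n_meta))
for a, b in zip(labels[:-1], labels[1:]):
    T[a, b] += 1
rows = T.sum(axis=1, keepdims=True)
T = np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)  # row-normalise
```

Each row of `T` is the probability distribution over the next gaze region given the current one, which is what lets per-condition matrices (clear vs. rainy) be compared directly.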
Related papers
- On the Assessment of Sensitivity of Autonomous Vehicle Perception [0.13858851827255522]
The viability of automated driving is heavily dependent on the performance of perception systems. We evaluate perception performance using predictive sensitivity quantification based on an ensemble of models. A perception assessment criterion is developed based on an AV's stopping distance at a stop sign on varying road surfaces.
arXiv Detail & Related papers (2026-01-30T21:06:05Z) - AVOID: The Adverse Visual Conditions Dataset with Obstacles for Driving Scene Understanding [48.97660297411286]
We introduce AVOID, a new dataset for real-time obstacle detection in a simulated environment. AVOID consists of a large set of unexpected road obstacles located along each path captured under various weather and time conditions. Each image is coupled with the corresponding semantic and depth maps, raw and semantic LiDAR data, and waypoints.
arXiv Detail & Related papers (2025-12-29T05:34:26Z) - Classification of Driver Behaviour Using External Observation Techniques for Autonomous Vehicles [0.7734726150561086]
This study introduces a novel driver behaviour classification system that uses external observation techniques to detect indicators of distraction and impairment. The proposed framework employs advanced computer vision methodologies, including real-time object tracking, lateral displacement analysis, and lane position monitoring. Unlike systems reliant on inter-vehicular communication, this vision-based approach enables behavioural analysis of non-connected vehicles.
arXiv Detail & Related papers (2025-09-11T11:05:14Z) - Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving [55.96227460521096]
Vision-Language Models (VLMs) have been integrated into autonomous driving systems to enhance reasoning capabilities. We propose a natural reflection-based backdoor attack targeting VLM systems in autonomous driving scenarios. Our findings uncover a new class of attacks that exploit the stringent real-time requirements of autonomous driving.
arXiv Detail & Related papers (2025-05-09T20:28:17Z) - GARLIC: GPT-Augmented Reinforcement Learning with Intelligent Control for Vehicle Dispatching [81.82487256783674]
This paper introduces GARLIC, a framework of GPT-Augmented Reinforcement Learning with Intelligent Control for vehicle dispatching.
arXiv Detail & Related papers (2024-08-19T08:23:38Z) - Snowy Scenes, Clear Detections: A Robust Model for Traffic Light Detection in Adverse Weather Conditions [5.208045772970408]
Adverse weather presents major challenges for current detection systems, often resulting in failures and potential safety risks.
This paper introduces a novel framework and pipeline designed to improve object detection under such conditions.
Results show a 40.8% improvement in average IoU and F1 scores compared to naive fine-tuning.
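The IoU score reported above is the standard intersection-over-union between predicted and ground-truth boxes. A minimal sketch, assuming the common (x1, y1, x2, y2) corner format (the paper's exact box convention is not stated here):

```python
# Intersection-over-Union for axis-aligned boxes in (x1, y1, x2, y2) format.
def iou(a, b):
    # Overlap rectangle corners
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-overlap 2x2 boxes, `iou((0, 0, 2, 2), (1, 1, 3, 3))`, give 1/7: intersection 1, union 4 + 4 - 1 = 7.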
arXiv Detail & Related papers (2024-06-19T11:52:12Z) - In-vehicle Sensing and Data Analysis for Older Drivers with Mild Cognitive Impairment [0.8426358786287627]
The objectives of this paper include designing low-cost in-vehicle sensing hardware capable of obtaining high-precision positioning and telematics data.
Our statistical analysis comparing drivers with mild cognitive impairment (MCI) to those without reveals that those with MCI exhibit smoother and safer driving patterns.
Our Random Forest models identified the number of night trips, number of trips, and education as the most influential factors in our data evaluation.
arXiv Detail & Related papers (2023-11-15T15:47:24Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z) - Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z) - Towards Safer Transportation: a self-supervised learning approach for traffic video deraining [0.9281671380673306]
This study proposes a two-stage self-supervised learning method to remove rain streaks in traffic videos.
The results indicated that the model exhibits satisfactory performance in terms of the image visual quality and the Peak Signal-Noise Ratio value.
arXiv Detail & Related papers (2021-10-11T19:17:07Z) - A Benchmark for Spray from Nearby Cutting Vehicles [7.767933159959353]
This publication presents a testing methodology for disturbances from spray.
It introduces a novel lightweight and spray setup alongside an evaluation scheme to assess the disturbances caused by spray.
In a common scenario of a closely cutting vehicle, the distortions visibly and severely affect the perception stack for up to four seconds.
arXiv Detail & Related papers (2021-08-24T15:40:09Z) - When Do Drivers Concentrate? Attention-based Driver Behavior Modeling With Deep Reinforcement Learning [8.9801312307912]
We propose an actor-critic method to approximate a driver's action according to observations and measure the driver's attention allocation.
Considering reaction time, we construct the attention mechanism in the actor network to capture temporal dependencies of consecutive observations.
We conduct experiments on real-world vehicle trajectory datasets and show that the accuracy of our proposed approach outperforms seven baseline algorithms.
arXiv Detail & Related papers (2020-02-26T09:56:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.