On the Assessment of Sensitivity of Autonomous Vehicle Perception
- URL: http://arxiv.org/abs/2602.00314v1
- Date: Fri, 30 Jan 2026 21:06:05 GMT
- Title: On the Assessment of Sensitivity of Autonomous Vehicle Perception
- Authors: Apostol Vassilev, Munawar Hasan, Edward Griffor, Honglan Jin, Pavel Piliptchak, Mahima Arora, Thoshitha Gamage,
- Abstract summary: The viability of automated driving is heavily dependent on the performance of perception systems. We evaluate perception performance using predictive sensitivity quantification based on an ensemble of models. A perception assessment criterion is developed based on an AV's stopping distance at a stop sign on varying road surfaces.
- Score: 0.13858851827255522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The viability of automated driving depends heavily on the performance of perception systems, which must provide accurate, reliable information in real time for robust decision-making and maneuvering. These systems must perform reliably not only under ideal conditions but also when challenged by natural and adversarial driving factors; both types of interference can lead to perception errors and delays in detection and classification. Hence, it is essential to assess the robustness of the perception systems of automated vehicles (AVs) and explore strategies for making perception more reliable. We approach this problem by evaluating perception performance using predictive sensitivity quantification based on an ensemble of models, capturing model disagreement and inference variability across multiple models under adverse driving scenarios in both simulated environments and real-world conditions. A notional architecture for assessing perception performance is proposed, and a perception assessment criterion is developed based on an AV's stopping distance at a stop sign on varying road surfaces, such as dry and wet asphalt, and at varying vehicle speeds. Five state-of-the-art computer vision models are used in our experiments: YOLO (v8-v9), DEtection TRansformer (DETR50, DETR101), and Real-Time DEtection TRansformer (RT-DETR). Diminished lighting conditions, e.g., those resulting from fog or low sun altitude, have the greatest impact on the performance of the perception models. Adversarial road conditions such as occlusions of roadway objects also increase perception sensitivity, and model performance drops further when adversarial road conditions are combined with inclement weather. Finally, it is demonstrated that the greater the distance to a roadway object, the greater the impact on perception performance, and hence the lower the perception robustness.
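The stopping-distance criterion and the ensemble-based sensitivity measure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the friction coefficients (0.7 dry, 0.4 wet), reaction time, and the choice of standard deviation as the disagreement measure are all assumptions for the sake of the example.

```python
import statistics

G = 9.81  # gravitational acceleration, m/s^2

def ensemble_sensitivity(confidences):
    """Disagreement across detector confidences, e.g. the spread of the
    scores several models assign to the same stop sign (illustrative
    choice; the paper's exact sensitivity metric may differ)."""
    return statistics.stdev(confidences)

def stopping_distance(speed_mps, mu, t_react=1.0):
    """Reaction distance plus braking distance: v*t + v^2 / (2*mu*g)."""
    return speed_mps * t_react + speed_mps ** 2 / (2 * mu * G)

# The perception system must detect the stop sign before this distance.
dry = stopping_distance(15.0, mu=0.7)  # ~15 m/s (54 km/h) on dry asphalt
wet = stopping_distance(15.0, mu=0.4)  # same speed, wet asphalt
assert wet > dry  # wet roads demand detection at greater range
```

The wet-asphalt case requires roughly 12 m of additional stopping distance at this speed, which is why the assessment criterion ties perception range to road surface.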
Related papers
- Robustness of Object Detection of Autonomous Vehicles in Adverse Weather Conditions [2.4690347153946237]
This paper proposes a method for evaluating the robustness of object detection ML models in autonomous vehicles under adverse weather conditions. It employs data augmentation operators to generate synthetic data simulating different degrees of severity of the adverse operating conditions. The robustness of the object detection model is measured by the average first-failure coefficient (AFFC) over the input images in the benchmark.
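A first-failure robustness score of this kind could be sketched as below. The exact AFFC definition is not given in the summary, so this is a hypothetical reading: severity is swept from 0 (clean) to 1 (most severe), the first severity at which the detector misses the object is recorded, and the coefficients are averaged over the benchmark.

```python
def first_failure(detect, image, steps=10):
    """Smallest swept severity in (0, 1] at which detection fails,
    or 1.0 if the detector survives every severity level."""
    for k in range(1, steps + 1):
        severity = k / steps
        if not detect(image, severity):
            return severity
    return 1.0

def affc(detect, images, steps=10):
    """Average first-failure coefficient over a benchmark of images
    (hypothetical formulation; higher means more robust)."""
    return sum(first_failure(detect, im, steps) for im in images) / len(images)

# Toy detector that fails once severity exceeds an image-specific threshold.
score = affc(lambda im, s: s <= im, [0.35, 0.55, 0.75])
```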
arXiv Detail & Related papers (2026-02-13T13:02:44Z) - ROAR: Robust Accident Recognition and Anticipation for Autonomous Driving [17.936492070548]
Existing methods often assume ideal conditions, overlooking challenges such as sensor failures, environmental disturbances, and data imperfections. This study introduces ROAR, a novel approach for accident detection and prediction. ROAR combines Discrete Wavelet Transform (DWT), a self-adaptive object-aware module, and dynamic focal loss to tackle these challenges.
arXiv Detail & Related papers (2025-11-09T04:55:37Z) - Object detection in adverse weather conditions for autonomous vehicles using Instruct Pix2Pix [1.15692661299731]
Enhancing robustness of object detection systems under adverse weather conditions is crucial for the advancement of autonomous driving technology. This study presents a novel approach leveraging the diffusion model Instruct Pix2Pix to generate realistic datasets with weather-based augmentations.
arXiv Detail & Related papers (2025-05-13T05:12:07Z) - Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving [55.96227460521096]
Vision-Language Models (VLMs) have been integrated into autonomous driving systems to enhance reasoning capabilities. We propose a natural reflection-based backdoor attack targeting VLM systems in autonomous driving scenarios. Our findings uncover a new class of attacks that exploit the stringent real-time requirements of autonomous driving.
arXiv Detail & Related papers (2025-05-09T20:28:17Z) - LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions [61.87108000328186]
Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering.
Existing LD benchmarks primarily focus on evaluating common cases, neglecting the robustness of LD models against environmental illusions.
This paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil.
arXiv Detail & Related papers (2024-06-03T02:12:27Z) - RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [46.909086734963665]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior. Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving. RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
arXiv Detail & Related papers (2023-12-12T06:21:30Z) - DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the urge of safety in driving systems, no solution to the MOT adaptation problem to domain shift in test-time conditions has ever been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z) - Learning Terrain-Aware Kinodynamic Model for Autonomous Off-Road Rally
Driving With Model Predictive Path Integral Control [4.23755398158039]
We propose a method for learning terrain-aware kinodynamic model conditioned on both proprioceptive and exteroceptive information.
The proposed model generates reliable predictions of 6-degree-of-freedom motion and can even estimate contact interactions.
We demonstrate the effectiveness of our approach through experiments on a simulated off-road track, showing that our proposed model-controller pair outperforms the baseline.
arXiv Detail & Related papers (2023-05-01T06:09:49Z) - How Do We Fail? Stress Testing Perception in Autonomous Vehicles [40.19326157052966]
This paper presents a method for characterizing failures of LiDAR-based perception systems for autonomous vehicles in adverse weather conditions.
We develop a methodology based in reinforcement learning to find likely failures in object tracking and trajectory prediction due to sequences of disturbances.
arXiv Detail & Related papers (2022-03-26T20:48:09Z) - Vision in adverse weather: Augmentation using CycleGANs with various
object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of
Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
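The SNR degradation mechanism described here can be illustrated with a small sketch. The Beer-Lambert attenuation model and the extinction coefficient value below are assumptions chosen for illustration, not parameters from the paper.

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels."""
    return 10 * math.log10(signal / noise)

def attenuated(signal, alpha, range_m):
    """Beer-Lambert attenuation of a lidar return; the factor of 2
    accounts for the round trip to the target and back."""
    return signal * math.exp(-2 * alpha * range_m)

clear = snr_db(1.0, 1e-3)                          # clear air
foggy = snr_db(attenuated(1.0, 0.03, 50.0), 1e-3)  # fog, target at 50 m
assert foggy < clear  # weather attenuation erodes the SNR margin
```

With these illustrative numbers the fog case loses roughly 13 dB, which is the kind of margin erosion that degrades 3D object detection at range.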
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Worsening Perception: Real-time Degradation of Autonomous Vehicle
Perception Performance for Simulation of Adverse Weather Conditions [47.529411576737644]
This study explores the potential of using a simple, lightweight image augmentation system in an autonomous racing vehicle.
With minimal adjustment, the prototype system can replicate the effects of both water droplets on the camera lens, and fading light conditions.
arXiv Detail & Related papers (2021-03-03T23:49:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.