Sensitivity analysis of AI-based algorithms for autonomous driving on
optical wavefront aberrations induced by the windshield
- URL: http://arxiv.org/abs/2308.11711v1
- Date: Sat, 19 Aug 2023 17:01:23 GMT
- Title: Sensitivity analysis of AI-based algorithms for autonomous driving on
optical wavefront aberrations induced by the windshield
- Authors: Dominik Werner Wolf and Markus Ulrich and Nikhil Kapoor
- Abstract summary: This paper investigates the domain shift problem by evaluating the sensitivity of two perception models to different windshield configurations.
Our results show that there is a performance gap introduced by windshields and that the existing optical metrics used to pose requirements might not be sufficient.
- Score: 4.542616945567623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous driving perception techniques are typically based on supervised
machine learning models that are trained on real-world street data. A typical
training process involves capturing images with a single car model and
windshield configuration. However, deploying these trained models on different
car types can lead to a domain shift, which can potentially hurt the neural
network's performance and violate working ADAS requirements. To address this
issue, this paper investigates the domain shift problem further by evaluating
the sensitivity of two perception models to different windshield
configurations. This is done by evaluating the dependencies between neural
network benchmark metrics and optical merit functions by applying a Fourier
optics based threat model. Our results show that there is a performance gap
introduced by windshields and that the existing optical metrics used to pose
requirements might not be sufficient.
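As a rough illustration of such a Fourier-optics based threat model, the sketch below converts a toy windshield wavefront aberration (low-order Zernike-like terms) into a point spread function and convolves it with a clean camera frame before it reaches the perception model. The aberration coefficients, pupil sampling, and grayscale input are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def pupil_grid(n=256):
    """Normalized pupil coordinates (rho, theta) and the circular aperture mask."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return rho, theta, (rho <= 1.0)

def wavefront_error(rho, theta, defocus=0.3, astig=0.15):
    """Toy wavefront aberration in waves: defocus plus oblique astigmatism.
    Coefficients are illustrative, not taken from the paper."""
    z_defocus = np.sqrt(3) * (2 * rho**2 - 1)
    z_astig = np.sqrt(6) * rho**2 * np.sin(2 * theta)
    return defocus * z_defocus + astig * z_astig

def psf_from_wavefront(n=256):
    """Point spread function as |FFT of the complex pupil function|^2."""
    rho, theta, aperture = pupil_grid(n)
    pupil = aperture * np.exp(2j * np.pi * wavefront_error(rho, theta))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()

def apply_windshield_blur(image, psf):
    """Convolve a (H, W) grayscale frame with the aberration PSF."""
    return fftconvolve(image, psf, mode="same")

# Compare perception metrics (e.g. mAP, mIoU) on clean vs. degraded inputs.
clean = np.random.rand(512, 512)                # placeholder for a camera frame
degraded = apply_windshield_blur(clean, psf_from_wavefront())
```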
Related papers
- Optical Flow Matters: an Empirical Comparative Study on Fusing Monocular Extracted Modalities for Better Steering [37.46760714516923]
This research introduces a new end-to-end method that exploits multimodal information from a single monocular camera to improve the steering predictions for self-driving cars.
By focusing on the fusion of RGB imagery with depth completion information or optical flow data, we propose a framework that integrates these modalities through both early and hybrid fusion techniques.
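As a minimal sketch of the early-fusion variant (with assumed channel counts and layer sizes, not the architecture from the paper), RGB and optical-flow channels can simply be stacked before a small convolutional steering regressor:

```python
import torch
import torch.nn as nn

class EarlyFusionSteering(nn.Module):
    """Toy early-fusion model: RGB (3 ch) + optical flow (2 ch) -> steering angle."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # predicted steering angle

    def forward(self, rgb, flow):
        x = torch.cat([rgb, flow], dim=1)  # early fusion: stack modalities as channels
        feats = self.backbone(x).flatten(1)
        return self.head(feats)

model = EarlyFusionSteering()
rgb = torch.randn(4, 3, 128, 256)    # batch of RGB frames
flow = torch.randn(4, 2, 128, 256)   # matching optical-flow fields (u, v)
steering = model(rgb, flow)          # shape: (4, 1)
```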
arXiv Detail & Related papers (2024-09-18T09:36:24Z)
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training.
In contrast to previous work, our method does not require these salient semantic maps to be available during testing time.
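A minimal sketch of such an attention-guidance term, assuming the salient semantic maps and the model's spatial attention are both available as (B, H, W) tensors at training time (the exact loss used in the paper may differ):

```python
import torch
import torch.nn.functional as F

def attention_guidance_loss(attn_map, saliency_map, eps=1e-8):
    """Toy auxiliary loss: KL divergence between the model's spatial attention
    and a precomputed salient-semantic map. Both inputs are (B, H, W), get
    normalized to probability maps, and the term is used at training time only."""
    b = attn_map.shape[0]
    attn = attn_map.reshape(b, -1)
    sal = saliency_map.reshape(b, -1)
    attn = attn / (attn.sum(dim=1, keepdim=True) + eps)
    sal = sal / (sal.sum(dim=1, keepdim=True) + eps)
    return F.kl_div((attn + eps).log(), sal, reduction="batchmean")

# total_loss = driving_loss + lambda_attn * attention_guidance_loss(attn, saliency)
```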
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
- RainSD: Rain Style Diversification Module for Image Synthesis Enhancement using Feature-Level Style Distribution [5.500457283114346]
This paper presents a synthetic road dataset with sensor blockage generated from real road dataset BDD100K.
Using this dataset, the degradation of diverse multi-task networks for autonomous driving has been thoroughly evaluated and analyzed.
The tendency of performance degradation in deep neural network-based perception systems for autonomous vehicles has been analyzed in depth.
arXiv Detail & Related papers (2023-12-31T11:30:42Z)
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
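A very loose sketch of what enforcing rational driving constraints on a predicted acceleration could look like: bounded acceleration, no negative speed, and no negative spacing. The bound values and the projection rule are illustrative assumptions, not the RDC formulation used by RACER.

```python
import numpy as np

def apply_rational_constraints(accel_pred, v_ego, gap, dt=0.1,
                               a_min=-3.0, a_max=2.0):
    """Clamp a predicted ACC acceleration so the next state stays 'rational':
    bounded acceleration, non-negative speed, and non-negative spacing.
    Bounds are illustrative placeholders, not the paper's RDC values."""
    a = np.clip(accel_pred, a_min, a_max)
    v_next = max(v_ego + a * dt, 0.0)   # no driving backwards
    gap_next = gap - v_next * dt        # worst case: lead vehicle stands still
    if gap_next < 0.0:                  # brake hard if the gap would close
        a = a_min
    return a

print(apply_rational_constraints(accel_pred=1.5, v_ego=20.0, gap=5.0))
```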
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the urgency of safety in driving systems, no solution to the problem of adapting MOT to domain shift under test-time conditions has previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Windscreen Optical Quality for AI Algorithms: Refractive Power and MTF not Sufficient [74.2843502619298]
Automotive mass production processes require measurement systems that characterize the optical quality of the windscreens in a meaningful way.
In this article we demonstrate that the main metric established in the industry - refractive power - is fundamentally not capable of capturing relevant optical properties of windscreens.
We propose a novel concept to determine the optical quality of windscreens and to use simulation to link this optical quality to the performance of AI algorithms.
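For reference, the MTF mentioned above can be read off as the normalized magnitude of the optical transfer function, i.e. the Fourier transform of the point spread function. The sketch below uses simplified sampling and an ideal PSF purely for illustration and is not the measurement procedure proposed in that paper.

```python
import numpy as np

def mtf_from_psf(psf):
    """Modulation transfer function: normalized magnitude of the PSF's OTF."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    return mtf / mtf.max()

def mtf_at_nyquist_fraction(mtf, fraction=0.5):
    """Read off the MTF along the horizontal frequency axis at a fraction of Nyquist."""
    n = mtf.shape[1]
    center = n // 2
    return float(mtf[center, center + int(fraction * center)])

# Example: an ideal (delta-like) PSF keeps full contrast at mid frequencies.
psf = np.zeros((64, 64)); psf[32, 32] = 1.0
print(mtf_at_nyquist_fraction(mtf_from_psf(psf)))  # -> 1.0
```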
arXiv Detail & Related papers (2023-05-23T20:41:04Z)
- CARNet: A Dynamic Autoencoder for Learning Latent Dynamics in Autonomous Driving Tasks [11.489187712465325]
An autonomous driving system should effectively use the information collected from the various sensors in order to form an abstract description of the world.
Deep learning models, such as autoencoders, can be used for that purpose, as they can learn compact latent representations from a stream of incoming data.
This work proposes CARNet, a Combined dynAmic autoencodeR NETwork architecture that utilizes an autoencoder combined with a recurrent neural network to learn the current latent representation.
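A compact sketch of the general "autoencoder plus recurrent network" pattern described above, with made-up layer sizes rather than CARNet's actual architecture: each frame is encoded to a latent vector, an LSTM propagates the latent dynamics, and a small decoder head maps the current latent back toward feature space (full image reconstruction omitted for brevity).

```python
import torch
import torch.nn as nn

class LatentDynamicsAE(nn.Module):
    """Toy 'autoencoder + RNN' model: encode each frame to a latent vector,
    run an LSTM over the latent sequence, and decode from the current latent."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim),
        )
        self.rnn = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU())

    def forward(self, frames):                    # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        z = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        z_dyn, _ = self.rnn(z)                    # temporally-aware latents
        return z_dyn, self.decoder(z_dyn[:, -1])  # current latent + decoded features

model = LatentDynamicsAE()
z_dyn, recon_feat = model(torch.randn(2, 5, 3, 64, 64))
```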
arXiv Detail & Related papers (2022-05-18T04:15:42Z)
- Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference [68.8204255655161]
We propose a neural architecture comprising a generative model for sensory prediction, and a distinct generative model for motor trajectories.
We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories.
arXiv Detail & Related papers (2021-04-19T09:41:31Z)
- Improving Generalization of Transfer Learning Across Domains Using Spatio-Temporal Features in Autonomous Driving [45.655433907239804]
Vehicle simulation can be used to learn in the virtual world, and the acquired skills can be transferred to handle real-world scenarios.
Visual elements of the driving scene are intuitively crucial for human decision making during driving.
We propose a CNN+LSTM transfer learning framework to extract the spatio-temporal features representing vehicle dynamics from scenes.
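A rough sketch of a CNN+LSTM transfer-learning setup of this kind, using a frozen torchvision ResNet-18 as a stand-in backbone; the actual backbone, feature sizes, and outputs in the paper are likely different.

```python
import torch
import torch.nn as nn
from torchvision import models

class FrozenCNNLSTMPolicy(nn.Module):
    """Toy transfer-learning setup: a frozen (ideally pretrained) CNN backbone
    extracts per-frame features; an LSTM models vehicle dynamics over time."""
    def __init__(self, hidden=128, n_actions=3):
        super().__init__()
        backbone = models.resnet18(weights=None)  # load pretrained weights in practice
        backbone.fc = nn.Identity()
        for p in backbone.parameters():
            p.requires_grad = False               # transfer: keep source features fixed
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])              # decision from the last time step

policy = FrozenCNNLSTMPolicy()
logits = policy(torch.randn(2, 4, 3, 224, 224))
```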
arXiv Detail & Related papers (2021-03-15T03:26:06Z)
- Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm w.r.t. varying quality of the image input for autonomous driving.
Using the results of this sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
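One common way to run such a sensitivity analysis is with small gradient-based image perturbations; the sketch below applies a generic one-step FGSM-style perturbation against a steering regressor. This is a standard technique shown for illustration, not necessarily the exact adversarial-image pipeline of that paper; the model, loss, and epsilon are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, images, targets, epsilon=2 / 255):
    """One-step FGSM perturbation against a steering regressor (MSE loss).
    A generic sensitivity probe; epsilon and the loss are illustrative choices."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(images), targets)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# steering_model: any nn.Module mapping (B, 3, H, W) -> (B, 1)
# adv_batch = fgsm_perturb(steering_model, clean_batch, steering_targets)
```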
arXiv Detail & Related papers (2021-02-26T02:08:07Z)
- Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification [8.007800530105191]
We present a deep-neural-network architecture, which we term D-CRNN, for building high-fidelity representations of driving style.
Using CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using RNN to encode driving style.
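A toy sketch of this CNN-then-RNN pattern for trajectory data, with invented feature channels and layer sizes rather than the actual D-CRNN configuration:

```python
import torch
import torch.nn as nn

class TrajectoryCRNN(nn.Module):
    """Toy CNN+RNN driver-style encoder: 1-D convolutions capture local patterns
    in a trajectory feature sequence (e.g. speed, acceleration, heading change),
    and a GRU encodes their temporal dependencies into a style embedding."""
    def __init__(self, n_features=3, n_drivers=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(64, 64, batch_first=True)
        self.classifier = nn.Linear(64, n_drivers)

    def forward(self, traj):                  # traj: (B, T, n_features)
        x = self.cnn(traj.transpose(1, 2))    # -> (B, 64, T)
        out, _ = self.rnn(x.transpose(1, 2))  # -> (B, T, 64)
        style = out[:, -1]                    # driving-style embedding
        return self.classifier(style)

model = TrajectoryCRNN()
logits = model(torch.randn(8, 128, 3))        # 8 trips, 128 timesteps, 3 features
```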
arXiv Detail & Related papers (2021-02-11T04:33:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides (including the entries above) and is not responsible for any consequences of its use.