Optical aberrations in autonomous driving: Physics-informed parameterized temperature scaling for neural network uncertainty calibration
- URL: http://arxiv.org/abs/2412.13695v1
- Date: Wed, 18 Dec 2024 10:36:46 GMT
- Title: Optical aberrations in autonomous driving: Physics-informed parameterized temperature scaling for neural network uncertainty calibration
- Authors: Dominik Werner Wolf, Alexander Braun, Markus Ulrich
- Abstract summary: We propose to incorporate a physical inductive bias into the neural network calibration architecture to enhance the robustness and the trustworthiness of the AI target application.
We pave the way for a trustworthy uncertainty representation and for a holistic verification strategy of the perception chain.
- Score: 49.03824084306578
- License:
- Abstract: 'A trustworthy representation of uncertainty is desirable and should be considered as a key feature of any machine learning method' (Huellermeier and Waegeman, 2021). This conclusion of Huellermeier et al. underpins the importance of calibrated uncertainties. Since AI-based algorithms are heavily impacted by dataset shifts, the automotive industry needs to safeguard its systems against all possible contingencies. One important but often neglected dataset shift is caused by optical aberrations induced by the windshield. For the verification of the perception system performance, requirements on the AI performance need to be translated into optical metrics by a bijective mapping (Braun, 2023). Given this bijective mapping, it is evident that the optical system characteristics add additional information about the magnitude of the dataset shift. As a consequence, we propose to incorporate a physical inductive bias into the neural network calibration architecture to enhance the robustness and the trustworthiness of the AI target application, which we demonstrate using a semantic segmentation task as an example. By utilizing the Zernike coefficient vector of the optical system as a physical prior, we can significantly reduce the mean expected calibration error in the case of optical aberrations. As a result, we pave the way for a trustworthy uncertainty representation and for a holistic verification strategy of the perception chain.
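The abstract describes a calibrator in which the temperature is not a single global constant but is predicted per sample from the Zernike coefficient vector of the optic. The sketch below illustrates one way such a physics-informed calibrator could be wired up and fitted on a held-out calibration set; the layer sizes, the number of Zernike coefficients, and the softplus parameterization are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: parameterized temperature scaling with a physical prior.
# The Zernike coefficient vector of the optical system is fed to a small MLP
# that predicts a per-image temperature used to rescale segmentation logits.
# All names, dimensions and the network layout are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZernikeTemperatureScaler(nn.Module):
    def __init__(self, n_zernike: int = 15, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_zernike, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, logits: torch.Tensor, zernike: torch.Tensor) -> torch.Tensor:
        # logits: (B, C, H, W) raw segmentation scores; zernike: (B, n_zernike)
        # softplus keeps the temperature strictly positive
        t = F.softplus(self.mlp(zernike)) + 1e-3          # (B, 1)
        return logits / t.view(-1, 1, 1, 1)

# Calibration: the segmentation network stays frozen; only the scaler is
# fitted by minimizing the NLL of the rescaled logits on held-out data.
scaler = ZernikeTemperatureScaler()
optimizer = torch.optim.Adam(scaler.parameters(), lr=1e-2)
logits = torch.randn(4, 19, 64, 128)         # dummy frozen-model outputs
zernike = torch.randn(4, 15)                 # dummy optical prior
labels = torch.randint(0, 19, (4, 64, 128))  # dummy ground truth
for _ in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(scaler(logits, zernike), labels)
    loss.backward()
    optimizer.step()
```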
Related papers
- Quantum-enhanced learning with a controllable bosonic variational sensor network [0.40964539027092906]
We propose a supervised learning scheme assisted by an entangled sensor network (SLAEN) that is capable of handling nonlinear data classification tasks.
We uncover a threshold phenomenon in the classification error: when the energy of the probes exceeds a certain threshold, the error drops sharply to zero.
arXiv Detail & Related papers (2024-04-28T19:41:40Z) - Mutual Information-calibrated Conformal Feature Fusion for Uncertainty-Aware Multimodal 3D Object Detection at the Edge [1.7898305876314982]
Three-dimensional (3D) object detection, a critical robotics operation, has seen significant advancements.
Our study integrates the principles of conformal inference with information theoretic measures to perform lightweight, Monte Carlo-free uncertainty estimation.
The framework demonstrates performance comparable to or better than similar methods that are not uncertainty-aware on the KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2023-09-18T09:02:44Z) - Classification robustness to common optical aberrations [64.08840063305313]
This paper proposes OpticsBench, a benchmark for investigating robustness to realistic, practically relevant optical blur effects.
Experiments on ImageNet show that, for a variety of pre-trained DNNs, performance under realistic optical blur kernels varies strongly compared to disk-shaped kernels.
We show on ImageNet-100 that robustness can be increased with OpticsAugment by using optical kernels as data augmentation.
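The augmentation idea amounts to convolving training images with an optical point spread function before they are fed to the network. The snippet below is a schematic stand-in that uses a Gaussian PSF; OpticsBench/OpticsAugment derive their kernels from realistic lens aberrations instead.

```python
# Schematic optical-blur augmentation: convolve each image with a point
# spread function (PSF) before training. A Gaussian PSF is used as a simple
# stand-in here; realistic kernels would be derived from lens aberrations.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size: int = 9, sigma: float = 1.5) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def apply_optical_blur(image: np.ndarray, psf: np.ndarray) -> np.ndarray:
    # image: (H, W, C) float array in [0, 1]; blur each channel independently
    return np.stack(
        [fftconvolve(image[..., c], psf, mode="same") for c in range(image.shape[-1])],
        axis=-1,
    )

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
augmented = apply_optical_blur(img, gaussian_psf())
```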
arXiv Detail & Related papers (2023-08-29T08:36:00Z) - Lightweight, Uncertainty-Aware Conformalized Visual Odometry [2.429910016019183]
Data-driven visual odometry (VO) is a critical subroutine for autonomous edge robotics.
Emerging edge robotics devices like insect-scale drones and surgical robots lack a computationally efficient framework to estimate VO's predictive uncertainties.
This paper presents a novel, lightweight, and statistically robust framework that leverages conformal inference (CI) to extract VO's uncertainty bands.
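For readers unfamiliar with conformal inference, the mechanism behind such uncertainty bands can be summarized by a generic split-conformal construction: residuals on a calibration split define a quantile that is added symmetrically around new point predictions. The sketch below illustrates only this generic mechanism, not the paper's specific construction for visual odometry.

```python
# Generic split-conformal sketch: calibrate a symmetric uncertainty band
# around a point prediction so that it covers the true value with
# probability roughly (1 - alpha). Illustrative of the mechanism only.
import numpy as np

def conformal_band(cal_pred, cal_true, test_pred, alpha=0.1):
    # Nonconformity scores on a held-out calibration split
    scores = np.abs(cal_true - cal_pred)
    n = len(scores)
    # Finite-sample-corrected quantile level
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return test_pred - q, test_pred + q

rng = np.random.default_rng(0)
cal_pred = rng.normal(size=500)
cal_true = cal_pred + rng.normal(scale=0.2, size=500)
lower, upper = conformal_band(cal_pred, cal_true, np.array([0.3, -1.1]))
```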
arXiv Detail & Related papers (2023-03-03T20:37:55Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs with minimal computational expense.
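A toy illustration of combining the two inference modes: an amortized encoder proposes an initial latent code ("fast"), which is then refined by gradient steps on the prediction error ("slow"). The architecture, step count and learning rate below are illustrative assumptions, not the paper's model.

```python
# Toy sketch of hybrid inference: an amortized encoder proposes an initial
# latent code, which is then refined iteratively by gradient descent on the
# reconstruction (prediction) error.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 32))
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))

def hybrid_infer(x: torch.Tensor, n_steps: int = 20, lr: float = 0.1) -> torch.Tensor:
    z = encoder(x).detach().requires_grad_(True)    # fast amortized guess
    for _ in range(n_steps):                        # slow iterative refinement
        err = ((decoder(z) - x) ** 2).sum()
        (grad,) = torch.autograd.grad(err, z)
        z = (z - lr * grad).detach().requires_grad_(True)
    return z.detach()

x = torch.randn(4, 32)
z = hybrid_infer(x)
```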
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information-theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z) - Quantum readout of imperfect classical data [0.0]
We propose an optimized quantum sensing protocol to maximize the readout accuracy in the presence of imprecise writing.
This work has implications for the identification of patterns in biological systems, for spectrophotometry, and for any setting in which information can be extracted from a transmission/reflection optical measurement.
arXiv Detail & Related papers (2022-02-02T18:34:17Z) - Automatic differentiation approach for reconstructing spectral functions with neural networks [30.015034534260664]
We propose an automatic differentiation framework as a generic tool for the reconstruction from observable data.
We represent the spectra by neural networks and use the chi-square as the loss function, optimizing the parameters with backward automatic differentiation in an unsupervised manner.
The reconstruction accuracy is assessed through the Kullback-Leibler (KL) divergence and the mean squared error (MSE) at multiple noise levels.
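The described setup can be condensed into a few lines: a neural network parameterizes the spectrum on a frequency grid, the observable follows from a known integral kernel, and a chi-square loss is minimized by automatic differentiation. The kernel, noise model and network below are illustrative placeholders, not the paper's configuration.

```python
# Condensed sketch: a neural network parameterizes a spectrum rho(omega);
# the observable follows from a known integral kernel; a chi-square loss is
# minimized by backward automatic differentiation.
import torch
import torch.nn as nn

omega = torch.linspace(0.0, 5.0, 200).unsqueeze(1)   # frequency grid
tau = torch.linspace(0.1, 2.0, 40).unsqueeze(1)      # "measurement" grid
kernel = torch.exp(-tau @ omega.T)                    # placeholder Laplace-like kernel
d_omega = omega[1] - omega[0]

net = nn.Sequential(nn.Linear(1, 32), nn.Softplus(), nn.Linear(32, 1), nn.Softplus())
target = torch.rand(40, 1)                            # dummy noisy observable
sigma = 0.01 * torch.ones_like(target)                # assumed noise level

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    rho = net(omega)                                  # spectrum on the grid
    pred = kernel @ rho * d_omega                     # forward-modelled observable
    chi2 = (((pred - target) / sigma) ** 2).sum()     # chi-square loss
    chi2.backward()
    opt.step()
```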
arXiv Detail & Related papers (2021-12-12T11:21:57Z) - Attention that does not Explain Away [54.42960937271612]
Models based on the Transformer architecture have achieved better accuracy than those based on competing architectures on a large set of tasks.
A unique feature of the Transformer is its universal application of a self-attention mechanism, which allows for free information flow at arbitrary distances.
We propose a doubly-normalized attention scheme that is simple to implement and provides theoretical guarantees for avoiding the "explaining away" effect.
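As a rough illustration of what "doubly-normalized" means, the snippet below normalizes the attention weights along both the key and the query axes, so no key is entirely "explained away" by the others. This is only a generic sketch of the idea; the exact normalization scheme and its theoretical guarantees in the paper may differ.

```python
# Illustrative sketch of doubly-normalized attention weights: raw scores are
# normalized over the keys (as in standard softmax attention) and then
# re-normalized over the queries, so every key receives a non-vanishing share
# of attention. The paper's exact scheme may differ.
import torch

def doubly_normalized_attention(q, k, v, eps=1e-9):
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d**0.5          # (n_q, n_k)
    a = torch.softmax(scores, dim=-1)                   # normalize over keys
    a = a / (a.sum(dim=-2, keepdim=True) + eps)         # normalize over queries
    a = a / (a.sum(dim=-1, keepdim=True) + eps)         # rows sum to 1 again
    return a @ v

q, k, v = (torch.randn(6, 16) for _ in range(3))
out = doubly_normalized_attention(q, k, v)
```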
arXiv Detail & Related papers (2020-09-29T21:05:39Z) - Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
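The focal loss referenced here down-weights the cross-entropy term of well-classified examples by a factor (1 - p_t)^gamma, so confident correct predictions contribute little to the loss. A compact multi-class version is sketched below; gamma = 2.0 is a common default, not necessarily the setting used in the paper.

```python
# Compact multi-class focal loss: cross-entropy down-weighted by
# (1 - p_t)^gamma, where p_t is the predicted probability of the true class.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = focal_loss(logits, targets)
```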
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.