Evaluation of Out-of-Distribution Detection Performance on Autonomous
Driving Datasets
- URL: http://arxiv.org/abs/2401.17013v1
- Date: Tue, 30 Jan 2024 13:49:03 GMT
- Title: Evaluation of Out-of-Distribution Detection Performance on Autonomous
Driving Datasets
- Authors: Jens Henriksson, Christian Berger, Stig Ursing, Markus Borg
- Abstract summary: Safety measures need to be systematically investigated to determine to what extent they evaluate the intended performance of Deep Neural Networks (DNNs).
This work evaluates rejecting outputs from semantic segmentation DNNs by applying a Mahalanobis distance (MD) based on the most probable class-conditional Gaussian distribution for the predicted class as an OOD score.
The applicability of our findings will support legitimizing safety measures and motivate their use when arguing for the safe usage of DNNs in automotive perception.
- Score: 5.000404730573809
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety measures need to be systematically investigated to determine to
what extent they evaluate the intended performance of Deep Neural Networks (DNNs) for critical
applications. Due to a lack of verification methods for high-dimensional DNNs,
a trade-off is needed between accepted performance and handling of
out-of-distribution (OOD) samples.
This work evaluates rejecting outputs from semantic segmentation DNNs by
applying a Mahalanobis distance (MD) based on the most probable
class-conditional Gaussian distribution for the predicted class as an OOD
score. The evaluation covers three DNNs trained on the Cityscapes dataset and
tested on four automotive datasets, and finds that classification risk can be
drastically reduced at the cost of pixel coverage, even when applied to
unseen datasets. The applicability of our findings will support legitimizing
safety measures and motivate their use when arguing for the safe usage of DNNs in
automotive perception.
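For illustration, a minimal NumPy sketch of such an MD-based OOD score is given below. It assumes per-class feature means and a shared (tied) covariance have been estimated from in-distribution features; all names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def fit_class_gaussians(features, labels, num_classes):
    """Estimate per-class means and a shared (tied) covariance from
    in-distribution features (N x D) and their predicted classes (N,)."""
    d = features.shape[1]
    means = np.zeros((num_classes, d))
    cov = np.zeros((d, d))
    for c in range(num_classes):
        fc = features[labels == c]
        means[c] = fc.mean(axis=0)
        cov += (fc - means[c]).T @ (fc - means[c])
    cov /= len(features)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    return means, precision

def mahalanobis_ood_score(feature, predicted_class, means, precision):
    """OOD score for one pixel/sample: Mahalanobis distance between its feature
    and the Gaussian of the predicted class; larger means more likely OOD."""
    diff = feature - means[predicted_class]
    return float(np.sqrt(diff @ precision @ diff))
```

A rejection threshold on this score, calibrated on held-out in-distribution data, is what trades classification risk against pixel coverage as described in the abstract.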
Related papers
- Enumerating Safe Regions in Deep Neural Networks with Provable
Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
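As a rough illustration of the tolerance-limit idea only (not the epsilon-ProVe algorithm itself), the sketch below uses a standard distribution-free one-sided tolerance limit: with enough i.i.d. samples, the sample maximum upper-bounds a chosen fraction of the output distribution with a chosen confidence. `net` and `sample_region` are hypothetical.

```python
import math
import numpy as np

def tolerance_sample_size(coverage=0.99, confidence=0.95):
    """Smallest n such that the max of n i.i.d. samples exceeds at least
    `coverage` of the distribution with probability >= `confidence`
    (distribution-free one-sided tolerance limit: 1 - coverage**n >= confidence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def estimated_output_upper_bound(net, sample_region, coverage=0.99, confidence=0.95):
    """Statistically under-approximate the reachable outputs of `net` over an
    input region by sampling and taking the sample maximum as a tolerance limit."""
    n = tolerance_sample_size(coverage, confidence)
    xs = sample_region(n)                       # hypothetical sampler over the region
    ys = np.array([float(net(x)) for x in xs])  # scalar output for simplicity
    return ys.max()
```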
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
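For context, the generic energy-based OOD score that this line of work builds on can be sketched as follows. This is a simplification: the graph smoothing step below is an illustrative assumption, not GNNSafe's exact propagation scheme.

```python
import torch

def energy_score(logits, temperature=1.0):
    """Energy-based OOD score from logits: E(x) = -T * logsumexp(f(x)/T).
    Higher energy indicates a more likely out-of-distribution input."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def smooth_over_graph(node_scores, adj, steps=2, alpha=0.5):
    """Illustrative smoothing of per-node scores over a row-normalised adjacency,
    reflecting the idea of using graph structure to sharpen node-level scores."""
    p = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    for _ in range(steps):
        node_scores = alpha * node_scores + (1 - alpha) * (p @ node_scores)
    return node_scores
```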
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Detection of out-of-distribution samples using binary neuron activation
patterns [0.26249027950824505]
The ability to identify previously unseen inputs as novel is crucial in safety-critical applications such as self-driving cars, unmanned aerial vehicles, and robots.
Existing approaches to detect OOD samples treat a DNN as a black box and evaluate the confidence score of the output predictions.
In this work, we introduce a novel method for OOD detection. Our method is motivated by theoretical analysis of neuron activation patterns (NAP) in ReLU-based architectures.
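A heavily simplified sketch of the activation-pattern idea (not the paper's exact procedure) compares a test input's binary ReLU activation pattern at one layer against patterns collected on training data:

```python
import numpy as np

def binary_pattern(pre_activations):
    """Binary NAP for one layer: 1 where a ReLU unit fires, else 0."""
    return (pre_activations > 0).astype(np.uint8)

def collect_patterns(train_pre_activations):
    """Unique binary patterns observed on in-distribution training data (N x D)."""
    return np.unique(binary_pattern(train_pre_activations), axis=0)

def nap_ood_score(test_pre_activation, pattern_bank):
    """Hamming distance to the nearest training pattern; a large distance means
    the activation pattern was never seen during training, hinting at OOD input."""
    pattern = binary_pattern(test_pre_activation)
    return int((pattern_bank != pattern).sum(axis=1).min())
```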
arXiv Detail & Related papers (2022-12-29T11:42:46Z)
- Out of Distribution Data Detection Using Dropout Bayesian Neural
Networks [29.84998820573774]
We first show how previous attempts to leverage the randomized embeddings induced by the intermediate layers of a dropout BNN can fail due to the distance metric used.
We introduce an alternative approach to measuring embedding uncertainty, justify its use theoretically, and demonstrate how incorporating embedding uncertainty improves OOD data identification across three tasks: image classification, language classification, and malware detection.
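A minimal sketch of measuring embedding uncertainty with Monte-Carlo dropout follows. Here `embed_fn` is a hypothetical hook returning the intermediate embedding of interest, and the centroid-distance measure is an illustrative choice, not the paper's metric.

```python
import torch

def mc_dropout_embedding_uncertainty(model, x, embed_fn, passes=20):
    """Run several stochastic forward passes with dropout enabled and use the
    spread of the intermediate embeddings as an OOD score."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                      # keep only dropout layers stochastic
    with torch.no_grad():
        embs = torch.stack([embed_fn(model, x) for _ in range(passes)])  # (T, D)
    centre = embs.mean(dim=0)
    # Mean distance to the centroid: larger spread -> higher epistemic
    # uncertainty -> more likely out-of-distribution.
    return (embs - centre).norm(dim=-1).mean().item()
```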
arXiv Detail & Related papers (2022-02-18T02:23:43Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method in which, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Statistical Testing for Efficient Out of Distribution Detection in Deep
Neural Networks [26.0303701309125]
This paper frames the Out Of Distribution (OOD) detection problem in Deep Neural Networks as a statistical hypothesis testing problem.
We build on this framework to suggest a novel OOD procedure based on low-order statistics.
Our method achieves results comparable to or better than the state of the art on well-accepted OOD benchmarks without retraining the network parameters.
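As a toy illustration of the hypothesis-testing view (a simplification, not the paper's actual test), one can calibrate a test statistic built from low-order feature statistics on in-distribution data and reject the in-distribution hypothesis when it is exceeded:

```python
import numpy as np

def fit_reference_stats(id_features):
    """Per-dimension mean and std of an intermediate feature layer, estimated
    on in-distribution data (N x D); these are the 'low-order statistics'."""
    return id_features.mean(axis=0), id_features.std(axis=0) + 1e-8

def test_statistic(feature, ref_mean, ref_std):
    """Mean squared standardised deviation of one feature vector; under the
    in-distribution null hypothesis this stays small."""
    z = (feature - ref_mean) / ref_std
    return float(np.mean(z ** 2))

def calibrate_threshold(id_features, ref_mean, ref_std, significance=0.05):
    """Threshold as the (1 - significance) percentile of the statistic on
    held-out in-distribution data; exceeding it rejects the null (flags OOD)."""
    stats = [test_statistic(f, ref_mean, ref_std) for f in id_features]
    return float(np.percentile(stats, 100 * (1 - significance)))
```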
arXiv Detail & Related papers (2021-02-25T16:14:47Z)
- Sketching Curvature for Efficient Out-of-Distribution Detection for Deep
Neural Networks [32.629801680158685]
Sketching Curvature for OoD Detection (SCOD) is an architecture-agnostic framework for equipping trained Deep Neural Networks with task-relevant uncertainty estimates.
We demonstrate that SCOD achieves comparable or better OoD detection performance with lower computational burden relative to existing baselines.
arXiv Detail & Related papers (2021-02-24T21:34:40Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
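A sketch of a gradient-norm detector in this spirit (hedged; not GraN's exact formulation) backpropagates the loss for the predicted label and scores the input by the size of the resulting parameter gradients:

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """Score one input by the L1 norm of parameter gradients obtained from the
    loss at the *predicted* label; adversarial or misclassified inputs tend to
    produce larger gradients than clean, correctly handled ones."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    pred = logits.argmax(dim=-1)
    loss = F.cross_entropy(logits, pred)
    loss.backward()
    return sum(p.grad.abs().sum().item()
               for p in model.parameters() if p.grad is not None)
```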
arXiv Detail & Related papers (2020-04-20T10:09:27Z)