Out-of-Distribution Detection for Automotive Perception
- URL: http://arxiv.org/abs/2011.01413v2
- Date: Mon, 6 Sep 2021 03:43:53 GMT
- Title: Out-of-Distribution Detection for Automotive Perception
- Authors: Julia Nitsch, Masha Itkina, Ransalu Senanayake, Juan Nieto, Max Schmidt, Roland Siegwart, Mykel J. Kochenderfer, and Cesar Cadena
- Abstract summary: Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
- Score: 58.34808836642603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks (NNs) are widely used for object classification in autonomous
driving. However, NNs can fail on input data not well represented by the
training dataset, known as out-of-distribution (OOD) data. A mechanism to
detect OOD samples is important for safety-critical applications, such as
automotive perception, to trigger a safe fallback mode. NNs often rely on
softmax normalization for confidence estimation, which can lead to high
confidences being assigned to OOD samples, thus hindering the detection of
failures. This paper presents a method for determining whether inputs are OOD,
which does not require OOD data during training and does not increase the
computational cost of inference. The latter property is especially important in
automotive applications with limited computational resources and real-time
constraints. Our proposed approach outperforms state-of-the-art methods on
real-world automotive datasets.
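The abstract points to softmax normalization as the reason confidences stay high on OOD inputs. The sketch below is a minimal NumPy illustration of the standard maximum-softmax-probability (MSP) baseline that such work improves upon, not the paper's method; the logits and threshold are assumed values chosen to show how a confidence threshold can miss OOD samples.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability, the common baseline OOD confidence score."""
    return softmax(logits).max(axis=-1)

# Illustrative (made-up) logits: an input far from the training data can still
# produce a large logit margin, so its softmax confidence sits near 1.0 and a
# simple confidence threshold fails to flag it as OOD.
in_dist_logits = np.array([[2.0, 0.5, -1.0]])
ood_logits = np.array([[8.0, 1.0, 0.5]])       # hypothetical OOD input
print(msp_score(in_dist_logits))                # moderate confidence
print(msp_score(ood_logits))                    # near 1.0 despite being OOD
threshold = 0.9                                 # assumed operating point
is_ood = msp_score(ood_logits) < threshold      # fails to detect the OOD input
```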
Related papers
- Evaluation of Out-of-Distribution Detection Performance on Autonomous
Driving Datasets [5.000404730573809]
Safety measures need to be systematically investigated with respect to how well they evaluate the intended performance of Deep Neural Networks (DNNs).
This work evaluates rejecting outputs from semantic segmentation DNNs by applying a Mahalanobis distance (MD) based on the most probable class-conditional Gaussian distribution for the predicted class as an OOD score.
The applicability of our findings will support legitimizing safety measures and motivate their adoption when arguing for the safe use of DNNs in automotive perception.
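The Mahalanobis-distance score described above can be written down compactly. The following NumPy sketch fits class-conditional Gaussians (per-class means, shared covariance) on in-distribution features and scores a test feature by its distance to the Gaussian of the predicted class; the feature dimensions and data here are placeholders, not the evaluation setup of that paper.

```python
import numpy as np

def fit_class_gaussians(feats: np.ndarray, labels: np.ndarray):
    """Per-class means and a shared (regularized) precision matrix from ID features."""
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centered = feats - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(feats)
    return means, np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def mahalanobis_ood_score(feat: np.ndarray, predicted_class: int,
                          means: np.ndarray, prec: np.ndarray) -> float:
    """Mahalanobis distance to the class-conditional Gaussian of the predicted
    class; a larger distance indicates a more likely OOD input."""
    diff = feat - means[predicted_class]
    return float(diff @ prec @ diff)

# Stand-in features and labels (assumptions; in practice these would come from
# the segmentation network's intermediate features on in-distribution data).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 8))
train_labels = rng.integers(0, 3, size=200)
means, prec = fit_class_gaussians(train_feats, train_labels)
score = mahalanobis_ood_score(rng.normal(size=8), predicted_class=0,
                              means=means, prec=prec)
```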
arXiv Detail & Related papers (2024-01-30T13:49:03Z)
- AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
arXiv Detail & Related papers (2023-03-22T02:28:54Z)
- Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection [35.539218522504605]
Uncertainty-Estimation with Normalized Logits (UE-NL) is a robust learning method for OOD detection.
UE-NL treats every ID sample equally by predicting the uncertainty score of input data.
It is more robust to noisy ID data that may be misjudged as OOD data by other methods.
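The snippet names normalized logits and a per-input uncertainty score but does not spell out the exact formulation. The sketch below is a generic logit-normalization confidence score in the spirit of this line of work; it is an assumption for illustration, not necessarily the UE-NL method itself.

```python
import numpy as np

def normalized_logit_confidence(logits: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Generic sketch: L2-normalize the logit vector before the softmax so the
    confidence depends on its direction rather than its raw magnitude.
    tau is an assumed temperature, not a value taken from the UE-NL paper."""
    norm = np.linalg.norm(logits, axis=-1, keepdims=True) + 1e-12
    z = logits / (norm * tau)
    z -= z.max(axis=-1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return p.max(axis=-1)   # low values signal high uncertainty / possible OOD

# Large-magnitude logits no longer translate directly into near-1.0 confidence.
print(normalized_logit_confidence(np.array([[8.0, 1.0, 0.5]])))
```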
arXiv Detail & Related papers (2023-02-15T11:57:09Z)
- Detection of out-of-distribution samples using binary neuron activation patterns [0.26249027950824505]
The ability to identify previously unseen inputs as novel is crucial in safety-critical applications such as self-driving cars, unmanned aerial vehicles, and robots.
Existing approaches to detect OOD samples treat a DNN as a black box and evaluate the confidence score of the output predictions.
In this work, we introduce a novel method for OOD detection. Our method is motivated by theoretical analysis of neuron activation patterns (NAP) in ReLU-based architectures.
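The neuron-activation-pattern idea can be illustrated directly: binarize post-ReLU activations of a hidden layer and compare a test pattern against patterns collected on training data. The layer choice, Hamming distance, and stand-in activations below are assumptions, not that paper's exact procedure.

```python
import numpy as np

def binary_pattern(hidden: np.ndarray) -> np.ndarray:
    """Binarize post-ReLU activations: 1 where a neuron fires, 0 otherwise."""
    return (hidden > 0).astype(np.uint8)

def nap_ood_score(test_hidden: np.ndarray, train_patterns: np.ndarray) -> int:
    """Hamming distance from the test pattern to the nearest training pattern;
    patterns unseen during training (large distance) suggest OOD inputs."""
    test_pattern = binary_pattern(test_hidden)
    return int((train_patterns != test_pattern).sum(axis=1).min())

# Stand-in ReLU activations of one hidden layer (assumption, for illustration).
rng = np.random.default_rng(0)
train_patterns = binary_pattern(np.maximum(rng.normal(size=(500, 64)), 0.0))
score = nap_ood_score(np.maximum(rng.normal(size=64), 0.0), train_patterns)
```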
arXiv Detail & Related papers (2022-12-29T11:42:46Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE - POsthoc pseudo-Ood REgularization, which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
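The summary does not describe how the pseudo-OOD samples are built from in-distribution data. As a purely hypothetical illustration, the sketch below splices token spans from two in-distribution utterances to fabricate an utterance that matches no single intent; POORE's actual generation procedure may differ.

```python
import random

def pseudo_ood_utterance(ind_utterances, rng):
    """Hypothetical heuristic: splice token spans from two in-distribution
    utterances so the result matches no single intent. POORE's actual
    generation procedure may differ from this illustration."""
    a, b = rng.sample(ind_utterances, 2)
    a_tokens, b_tokens = a.split(), b.split()
    cut_a = rng.randrange(1, len(a_tokens))
    cut_b = rng.randrange(1, len(b_tokens))
    return " ".join(a_tokens[:cut_a] + b_tokens[cut_b:])

rng = random.Random(0)
ind = ["book a table for two tonight", "play some relaxing jazz music"]
print(pseudo_ood_utterance(ind, rng))
```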
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
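Information-geometry approaches of this kind typically rely on the Fisher-Rao distance between discrete (softmax) distributions. The sketch below computes that distance and uses the closest class-conditional reference distribution as a score; it is a simplified stand-in under stated assumptions, not the full Igeood pipeline.

```python
import numpy as np

def fisher_rao_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Fisher-Rao (geodesic) distance between two discrete distributions."""
    bc = np.sum(np.sqrt(p * q))                    # Bhattacharyya coefficient
    return float(2.0 * np.arccos(np.clip(bc, 0.0, 1.0)))

def info_geometry_score(probs: np.ndarray, class_refs: np.ndarray) -> float:
    """Simplified stand-in score (not the exact Igeood formula): distance from
    the test softmax output to the closest class reference distribution."""
    return min(fisher_rao_distance(probs, ref) for ref in class_refs)

# Made-up class-conditional reference distributions (assumptions).
class_refs = np.array([[0.90, 0.05, 0.05],
                       [0.05, 0.90, 0.05],
                       [0.05, 0.05, 0.90]])
print(info_geometry_score(np.array([0.40, 0.35, 0.25]), class_refs))
```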
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way, we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
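The core signal here is that the regularized ensemble members contradict each other only on OOD samples in a test batch. A minimal sketch of turning that disagreement into a per-sample score follows; the ensemble logits and threshold are assumed placeholders, and the paper's artificial-labeling and regularization steps are not reproduced.

```python
import numpy as np
from itertools import combinations

def disagreement_score(ensemble_logits: np.ndarray) -> np.ndarray:
    """ensemble_logits has shape (num_models, num_samples, num_classes).
    Returns, per sample, the fraction of model pairs whose predicted labels
    differ; samples on which the ensemble members contradict each other
    receive high scores and are treated as OOD."""
    preds = ensemble_logits.argmax(axis=-1)                  # (models, samples)
    pairs = list(combinations(range(preds.shape[0]), 2))
    diff = np.stack([preds[i] != preds[j] for i, j in pairs])
    return diff.mean(axis=0)                                  # values in [0, 1]

# Stand-in logits for a 5-member ensemble on a batch of 16 samples (assumption).
rng = np.random.default_rng(0)
scores = disagreement_score(rng.normal(size=(5, 16, 10)))
ood_mask = scores > 0.5                                       # assumed threshold
```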
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.