Identifying Out-of-Distribution Samples in Real-Time for Safety-Critical
2D Object Detection with Margin Entropy Loss
- URL: http://arxiv.org/abs/2209.00364v1
- Date: Thu, 1 Sep 2022 11:14:57 GMT
- Title: Identifying Out-of-Distribution Samples in Real-Time for Safety-Critical
2D Object Detection with Margin Entropy Loss
- Authors: Yannik Blei, Nicolas Jourdan, Nils Gählert
- Abstract summary: We present an approach to enable OOD detection for 2D object detection by employing the margin entropy (ME) loss.
A CNN trained with the ME loss significantly outperforms OOD detection using standard confidence scores.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional Neural Networks (CNNs) are nowadays often employed in
vision-based perception stacks for safety-critical applications such as
autonomous driving or Unmanned Aerial Vehicles (UAVs). Due to the safety
requirements in those use cases, it is important to know the limitations of the
CNN and, thus, to detect Out-of-Distribution (OOD) samples. In this work, we
present an approach to enable OOD detection for 2D object detection by
employing the margin entropy (ME) loss. The proposed method is easy to
implement and can be applied to most existing object detection architectures.
In addition, we introduce Separability as a metric for detecting OOD samples in
object detection. We show that a CNN trained with the ME loss significantly
outperforms OOD detection using standard confidence scores. At the same time,
the runtime of the underlying object detection framework remains constant,
rendering the ME loss a powerful tool to enable OOD detection.
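The abstract does not spell out the ME loss itself; the following is a minimal PyTorch sketch of one plausible reading, assuming the loss adds a hinge term that pushes the softmax entropy of in-distribution detections below a margin (the margin and weight alpha are hypothetical hyperparameters, not taken from the paper):

    import torch
    import torch.nn.functional as F

    def margin_entropy_loss(logits, targets, margin=0.5, alpha=0.1):
        # Standard classification loss on in-distribution detections.
        ce = F.cross_entropy(logits, targets)
        # Shannon entropy of each detection's softmax distribution.
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
        # Hinge: penalize ID detections whose entropy exceeds the margin.
        me = F.relu(entropy - margin).mean()
        return ce + alpha * me

At inference, the same per-detection entropy (or the standard confidence score) can serve as the OOD score; the Separability metric introduced in the paper plausibly quantifies how well the ID and OOD score distributions separate, though the abstract does not define it.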
Related papers
- Revisiting Out-of-Distribution Detection in LiDAR-based 3D Object Detection [12.633311483061647]
Out-of-distribution (OOD) objects can lead to misclassifications, posing a significant risk to the safety and reliability of automated vehicles.
We propose a new evaluation protocol that allows the use of existing datasets without modifying the point cloud.
The effectiveness of our method is validated through experiments on the newly proposed nuScenes OOD benchmark.
arXiv Detail & Related papers (2024-04-24T13:48:38Z)
- Run-time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations [13.529124221397822]
We introduce a novel introspection solution for 2D object detection based on Deep Neural Networks (DNNs)
We implement several state-of-the-art (SOTA) introspection mechanisms for error detection in 2D object detection, using one-stage and two-stage object detectors evaluated on KITTI and BDD datasets.
Our performance evaluation shows that the proposed introspection solution outperforms SOTA methods, achieving an absolute reduction in the missed error ratio of 9% to 17% in the BDD dataset.
arXiv Detail & Related papers (2024-03-02T10:56:14Z)
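The introspection paper above does not describe its architecture in the abstract; as a loose illustration (an assumption, not the paper's design), a run-time introspector can be a small head that maps pooled detector backbone features to a frame-level error probability:

    import torch
    import torch.nn as nn

    class IntrospectionHead(nn.Module):
        # Hypothetical error predictor on top of detector backbone features.
        def __init__(self, feat_dim=256):
            super().__init__()
            self.classifier = nn.Sequential(
                nn.Linear(feat_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, feature_map):            # (B, C, H, W)
            pooled = feature_map.mean(dim=(2, 3))  # global average pooling
            return torch.sigmoid(self.classifier(pooled))  # P(frame has an error)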
- Detecting Out-of-distribution Objects Using Neuron Activation Patterns [0.0]
We introduce Neuron Activation PaTteRns for out-of-distribution samples detection in Object detectioN (NAPTRON).
Our approach outperforms state-of-the-art methods without affecting in-distribution (ID) performance.
We have created the largest open-source benchmark for OOD object detection.
arXiv Detail & Related papers (2023-07-31T06:41:26Z)
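NAPTRON's exact scoring rule is not given in the summary above; a common activation-pattern scheme (sketched here as an assumption) binarizes hidden activations observed on ID data and scores a new sample by its Hamming distance to the nearest stored pattern:

    import torch

    def binarize(activations):          # (N, D) hidden-layer activations
        return (activations > 0).to(torch.uint8)

    def naptron_style_score(sample_act, id_patterns):
        # id_patterns: (N, D) binarized ID activations gathered offline.
        pattern = binarize(sample_act)                # (1, D)
        hamming = (pattern ^ id_patterns).sum(dim=1)  # distance to each pattern
        return hamming.min().item()                   # large -> likely OOD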
- SR-OOD: Out-of-Distribution Detection via Sample Repairing [48.272537939227206]
Out-of-distribution (OOD) detection is a crucial task for ensuring the reliability and robustness of machine learning models.
Recent works have shown that generative models often assign high confidence scores to OOD samples, indicating that they fail to capture the semantic information of the data.
We take advantage of sample repairing and propose a novel OOD detection framework, namely SR-OOD.
Our framework achieves superior performance over the state-of-the-art generative methods in OOD detection.
arXiv Detail & Related papers (2023-05-26T16:35:20Z)
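Sample repairing is not detailed in the summary above; one natural instantiation (an assumption here, not necessarily SR-OOD's design) repairs the input with a generative model trained on ID data and uses the discrepancy between input and repaired sample as the OOD score:

    import torch

    def sr_ood_style_score(x, repair_net):
        # repair_net: hypothetical generative module (e.g., an autoencoder)
        # trained to reconstruct in-distribution images; x: (B, C, H, W).
        with torch.no_grad():
            x_repaired = repair_net(x)
        # OOD inputs are assumed to repair poorly -> large per-sample error.
        return ((x - x_repaired) ** 2).mean(dim=(1, 2, 3))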
- Out-of-Distribution Detection for LiDAR-based 3D Object Detection [8.33476679218773]
3D object detection is an essential part of automated driving.
Deep models are notorious for assigning high confidence scores to out-of-distribution (OOD) inputs.
In this paper, we focus on the detection of OOD inputs for LiDAR-based 3D object detection.
arXiv Detail & Related papers (2022-09-28T21:39:25Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performance in both speed and accuracy when compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
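The summary above names the ingredients but not the mechanics; a minimal sketch of a local adversarial attack, under the assumption that it is an FGSM-style step restricted to an image region, which can generate failure examples for training an observer network:

    import torch
    import torch.nn.functional as F

    def local_adversarial_attack(model, image, target, region, eps=0.03):
        # Perturb only the given (y0, y1, x0, x1) window so the segmentation
        # model fails locally; an observer like ObsNet learns to flag this.
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), target)
        loss.backward()
        y0, y1, x0, x1 = region
        with torch.no_grad():
            image[..., y0:y1, x0:x1] += eps * image.grad[..., y0:y1, x0:x1].sign()
        return image.detach()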
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that combines, from first principles, a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way, we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
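The combination scheme is only named in the summary above; one simple reading (a sketch under assumptions, not the paper's exact construction) routes inputs flagged by the binary OOD detector to a uniform, low-confidence output:

    import torch

    def ood_aware_predict(classifier, ood_detector, x, threshold=0.5):
        probs = torch.softmax(classifier(x), dim=-1)       # (B, K)
        p_id = torch.sigmoid(ood_detector(x)).squeeze(-1)  # (B,) P(in-distribution)
        uniform = torch.full_like(probs, 1.0 / probs.shape[-1])
        is_id = (p_id > threshold).unsqueeze(-1)
        # OOD-aware classifier: keep the prediction only for ID inputs.
        return torch.where(is_id, probs, uniform)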
- Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed if evaluated solely on slender objects.
arXiv Detail & Related papers (2020-11-17T09:39:42Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time Object Detection Systems [14.976840260248913]
This paper presents three Targeted adversarial Objectness Gradient (TOG) attacks that cause object-vanishing, object-fabrication, and object-mislabeling.
We also present a universal objectness gradient attack that exploits adversarial transferability for black-box attacks.
The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems.
arXiv Detail & Related papers (2020-04-09T01:36:23Z)
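As a rough illustration of an objectness-gradient attack (a single gradient step; TOG's actual attacks are targeted and more elaborate), the image is moved along the gradient that suppresses the detector's objectness scores:

    import torch

    def object_vanishing_step(objectness_sum, image, eps=0.03):
        # objectness_sum: hypothetical callable returning the summed
        # objectness scores of a detector for the given image.
        image = image.clone().requires_grad_(True)
        score = objectness_sum(image)
        score.backward()
        with torch.no_grad():
            image -= eps * image.grad.sign()  # suppress objectness -> objects vanish
        return image.detach()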
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
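ALOE's objective is not spelled out above; a plausible outlier-exposure-style sketch (perturbation steps omitted, hyperparameters hypothetical) combines cross-entropy on adversarially perturbed inliers with a term pushing outlier predictions toward the uniform distribution:

    import torch
    import torch.nn.functional as F

    def aloe_style_loss(model, x_in, y_in, x_out, lam=0.5):
        # x_in/y_in: (adversarially perturbed) labeled inliers;
        # x_out: (adversarially perturbed) unlabeled outliers.
        ce = F.cross_entropy(model(x_in), y_in)
        log_probs = F.log_softmax(model(x_out), dim=-1)
        to_uniform = -log_probs.mean()  # cross-entropy to the uniform distribution
        return ce + lam * to_uniform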
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.