Entropy Maximization and Meta Classification for Out-Of-Distribution
Detection in Semantic Segmentation
- URL: http://arxiv.org/abs/2012.06575v1
- Date: Wed, 9 Dec 2020 11:01:06 GMT
- Title: Entropy Maximization and Meta Classification for Out-Of-Distribution
Detection in Semantic Segmentation
- Authors: Robin Chan, Matthias Rottmann, Hanno Gottschalk
- Abstract summary: The ability to detect "out-of-distribution" (OoD) samples is crucial for many applications such as automated driving.
A natural baseline approach to OoD detection is to threshold on the pixel-wise softmax entropy.
We present a two-step procedure that significantly improves that approach.
- Score: 7.305019142196585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) for the semantic segmentation of images are
usually trained to operate on a predefined closed set of object classes. This
is in contrast to the "open world" setting in which DNNs are envisioned to be
deployed. From a functional safety point of view, the ability to detect
so-called "out-of-distribution" (OoD) samples, i.e., objects outside of a DNN's
semantic space, is crucial for many applications such as automated driving. A
natural baseline approach to OoD detection is to threshold on the pixel-wise
softmax entropy. We present a two-step procedure that significantly improves
that approach. Firstly, we utilize samples from the COCO dataset as OoD proxy
and introduce a second training objective to maximize the softmax entropy on
these samples. Starting from pretrained semantic segmentation networks we
re-train a number of DNNs on different in-distribution datasets and
consistently observe improved OoD detection performance when evaluating on
completely disjoint OoD datasets. Secondly, we perform a transparent
post-processing step to discard false positive OoD samples by so-called "meta
classification". To this end, we apply linear models to a set of hand-crafted
metrics derived from the DNN's softmax probabilities. In our experiments we
consistently observe a clear additional gain in OoD detection performance,
cutting down the number of detection errors by up to 52% when comparing the
best baseline with our results. We achieve this improvement while sacrificing
only marginally in original segmentation performance. Therefore, our method
contributes to safer DNNs with more reliable overall system performance.
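The two-step procedure is concrete enough to sketch in code. Below is a minimal PyTorch sketch assembled from the abstract alone, not the authors' reference implementation: the weight `lambda_ood`, the metric set in `segment_metrics`, and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pixelwise_softmax_entropy(logits):
    """Baseline OoD score: per-pixel entropy of the softmax output.
    logits: (B, C, H, W) -> entropy map: (B, H, W)."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def entropy_maximization_loss(logits_in, targets_in, logits_ood, lambda_ood=0.9):
    """Step 1 (sketch): cross-entropy on in-distribution pixels plus a term
    pushing the softmax on OoD proxy pixels (e.g. COCO objects) towards the
    uniform distribution, i.e. towards maximum entropy. The weight lambda_ood
    is an illustrative assumption, not the paper's exact value."""
    ce_in = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy against a uniform target over the C classes is minimized
    # exactly when the softmax entropy is maximal.
    ce_ood = -F.log_softmax(logits_ood, dim=1).mean()
    return (1.0 - lambda_ood) * ce_in + lambda_ood * ce_ood

def segment_metrics(probs):
    """Step 2 (sketch): hand-crafted metrics for one segment flagged as OoD.
    probs: (N_pixels, C) softmax probabilities of that segment. The exact
    metric set is an assumption; the paper derives a richer family of
    dispersion measures from the softmax output."""
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    top2 = probs.topk(2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]                # probability margin
    size = torch.tensor(float(probs.shape[0]))      # segment size
    return torch.stack([ent.mean(), ent.std(), margin.mean(), size.log()])

# "Meta classification": a linear model (logistic regression) over the
# segment metrics, trained to separate true from false positive OoD segments.
meta_classifier = torch.nn.Linear(4, 1)             # score = sigmoid(w^T x + b)
```

At test time, OoD detection still reduces to thresholding the map from `pixelwise_softmax_entropy`; the entropy-maximization training sharpens that map on unknown objects, and the linear meta classifier then discards false positive segments using only softmax-derived statistics, which keeps the post-processing transparent.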
Related papers
- Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers [0.7704032792820767]
Deep neural networks are applied in more and more areas of everyday life.
They still lack essential abilities, such as robustly dealing with spatially transformed input signals.
We propose a novel technique that emulates an inference process robust to such spatial transformations.
arXiv Detail & Related papers (2024-05-06T09:47:29Z) - Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked first.
arXiv Detail & Related papers (2023-08-31T05:05:53Z) - Pixel-wise Gradient Uncertainty for Convolutional Neural Networks
applied to Out-of-Distribution Segmentation [0.43512163406552007]
We present a method for obtaining uncertainty scores from pixel-wise loss gradients which can be computed efficiently during inference.
Our experiments show the ability of our method to identify wrong pixel classifications and to estimate prediction quality at negligible computational overhead (a minimal sketch of the gradient-scoring idea appears after this list).
arXiv Detail & Related papers (2023-03-13T08:37:59Z) - OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep
Neural Networks [7.797299214812479]
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs).
It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors.
Most existing robustness verification approaches for DNNs are focused on non-semantic perturbations.
arXiv Detail & Related papers (2023-01-27T18:54:00Z) - Raising the Bar on the Evaluation of Out-of-Distribution Detection [88.70479625837152]
We define two categories of OoD data using the subtly different concepts of perceptual/visual and semantic similarity to in-distribution (iD) data.
We propose a GAN based framework for generating OoD samples from each of these 2 categories, given an iD dataset.
We show that state-of-the-art OoD detection methods which perform exceedingly well on conventional benchmarks are significantly less robust on our proposed benchmark.
arXiv Detail & Related papers (2022-09-24T08:48:36Z) - Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z) - Meta-learning for Out-of-Distribution Detection via Density Estimation
in Latent Space [40.58524521473793]
We propose a simple yet effective meta-learning method to detect OoD samples with only a small amount of in-distribution data in a target task.
A neural network shared among all tasks is used to flexibly map instances in the original space to the latent space.
In experiments using six datasets, we demonstrate that the proposed method achieves better performance than existing meta-learning and OoD detection methods.
arXiv Detail & Related papers (2022-06-20T02:44:42Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z) - Generalized ODIN: Detecting Out-of-distribution Image without Learning
from Out-of-distribution Data [87.61504710345528]
We propose two strategies for freeing a neural network from tuning with OoD data, while improving its OoD detection performance.
We specifically propose decomposed confidence scoring as well as a modified input pre-processing method.
Our further analysis on a larger-scale image dataset shows that the two types of distribution shift, semantic and non-semantic, differ significantly.
arXiv Detail & Related papers (2020-02-26T04:18:25Z)
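Picking up the forward reference from the pixel-wise gradient uncertainty entry above: a minimal sketch of how loss-gradient scores can be computed efficiently at inference time. This illustrates the general idea only and is not necessarily that paper's construction; it uses the predicted class as a pseudo-label, for which the cross-entropy gradient with respect to the logits has a closed form and needs no backward pass.

```python
import torch
import torch.nn.functional as F

def pixelwise_gradient_score(logits):
    """Illustrative pixel-wise gradient uncertainty score.

    With the predicted class as pseudo-label, the cross-entropy gradient
    w.r.t. the logits is (p - onehot(argmax p)), so the per-pixel gradient
    norm can be read off the softmax directly, without a backward pass.
    logits: (B, C, H, W) -> scores: (B, H, W)."""
    num_classes = logits.shape[1]
    p = F.softmax(logits, dim=1)
    pred = p.argmax(dim=1)                                     # (B, H, W)
    onehot = F.one_hot(pred, num_classes).permute(0, 3, 1, 2).to(p.dtype)
    return torch.linalg.vector_norm(p - onehot, dim=1)         # L2 over classes
```

The score is largest where the softmax is far from one-hot, so it behaves like an uncertainty map that can be thresholded just like the entropy baseline above.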
This list is automatically generated from the titles and abstracts of the papers on this site.