Out-Of-Distribution Detection Is Not All You Need
- URL: http://arxiv.org/abs/2211.16158v1
- Date: Tue, 29 Nov 2022 12:40:06 GMT
- Title: Out-Of-Distribution Detection Is Not All You Need
- Authors: Joris Guérin (IRD), Kevin Delmas, Raul Sena Ferreira (LAAS), Jérémie Guiochet (LAAS)
- Abstract summary: We argue that OOD detection is not a well-suited framework to design efficient runtime monitors.
We show that studying monitors in the OOD setting can be misleading.
We also show that removing erroneous training data samples helps to train better monitors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The usage of deep neural networks in safety-critical systems is limited by
our ability to guarantee their correct behavior. Runtime monitors are
components aiming to identify unsafe predictions and discard them before they
can lead to catastrophic consequences. Several recent works on runtime
monitoring have focused on out-of-distribution (OOD) detection, i.e.,
identifying inputs that are different from the training data. In this work, we
argue that OOD detection is not a well-suited framework to design efficient
runtime monitors and that it is more relevant to evaluate monitors based on
their ability to discard incorrect predictions. We call this setting
out-of-model-scope detection and discuss the conceptual differences with OOD. We
also conduct extensive experiments on popular datasets from the literature to
show that studying monitors in the OOD setting can be misleading: (1) very good
OOD results can give a false impression of safety, and (2) comparison under the
OOD setting does not allow identifying the best monitor for detecting errors. Finally,
we also show that removing erroneous training data samples helps to train
better monitors.
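To make the distinction concrete, here is a minimal sketch (with hypothetical function names and synthetic data) of how the same monitor scores can be evaluated under the two settings: the OOD setting labels each input by whether it left the training distribution, while the out-of-model-scope setting labels it by whether the model's prediction is actually wrong.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_monitor(scores, is_ood, model_is_wrong):
    """Compare the two evaluation settings for the same monitor scores.

    scores         : higher means "reject this prediction"
    is_ood         : 1 if the input is out-of-distribution
    model_is_wrong : 1 if the model's prediction is incorrect
    """
    # OOD setting: the monitor is rewarded for flagging distribution shift,
    # even when the model happens to classify the shifted input correctly.
    auroc_ood = roc_auc_score(is_ood, scores)
    # Out-of-model-scope setting: the monitor is rewarded for flagging
    # predictions that are actually wrong, regardless of where they came from.
    auroc_oms = roc_auc_score(model_is_wrong, scores)
    return auroc_ood, auroc_oms

# Toy illustration: a monitor that perfectly tracks "OOD-ness" but not errors.
rng = np.random.default_rng(0)
is_ood = rng.integers(0, 2, size=1000)
# Say the model is wrong on 30% of ID inputs but right on 40% of OOD inputs.
model_is_wrong = np.where(is_ood == 1, rng.random(1000) < 0.6,
                          rng.random(1000) < 0.3)
scores = is_ood + 0.01 * rng.random(1000)  # near-perfect OOD detector

auroc_ood, auroc_oms = evaluate_monitor(scores, is_ood,
                                        model_is_wrong.astype(int))
print(f"AUROC (OOD setting): {auroc_ood:.2f}")  # ~1.0: looks "safe"
print(f"AUROC (OMS setting): {auroc_oms:.2f}")  # much lower: errors slip through
```

The gap between the two numbers is exactly the paper's point: a monitor can look excellent under the OOD setting while letting many incorrect predictions through.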
Related papers
- Resultant: Incremental Effectiveness on Likelihood for Unsupervised Out-of-Distribution Detection
Unsupervised out-of-distribution (U-OOD) detection aims to identify data samples with a detector trained solely on unlabeled in-distribution (ID) data.
Recent studies have developed various detectors based on deep generative models (DGMs) to move beyond likelihood.
We apply two techniques for each direction, specifically post-hoc prior and dataset entropy-mutual calibration.
Experimental results demonstrate that the Resultant could be a new state-of-the-art U-OOD detector.
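As background, here is a hedged sketch of the plain likelihood scoring that U-OOD detectors start from, using a Gaussian density as a stand-in for a deep generative model; this is the baseline such work moves beyond, not the Resultant method itself.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical stand-in for a deep generative model: a Gaussian density
# fitted on unlabeled in-distribution (ID) features.
rng = np.random.default_rng(0)
id_feats = rng.normal(0.0, 1.0, size=(500, 8))
ood_feats = rng.normal(3.0, 1.0, size=(500, 8))

mu = id_feats.mean(axis=0)
cov = np.cov(id_feats, rowvar=False) + 1e-6 * np.eye(8)  # regularized
density = multivariate_normal(mean=mu, cov=cov)

# Likelihood-based U-OOD score: low log-likelihood -> likely OOD.
score_id = density.logpdf(id_feats)
score_ood = density.logpdf(ood_feats)
print(f"mean log-likelihood  ID: {score_id.mean():.1f}  OOD: {score_ood.mean():.1f}")
```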
arXiv Detail & Related papers (2024-09-05T02:58:13Z)
- Can we Defend Against the Unknown? An Empirical Study About Threshold Selection for Neural Network Monitoring
Runtime monitoring becomes essential to reject unsafe predictions during inference.
Various techniques have emerged to establish rejection scores that maximize the separability between the distributions of safe and unsafe predictions.
In real-world applications, an effective monitor also requires identifying a good threshold to transform these scores into meaningful binary decisions.
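A minimal sketch of one common way to pick such a threshold, assuming a held-out validation set of monitor scores on safe predictions and a tolerated false-rejection budget (names and numbers below are hypothetical):

```python
import numpy as np

def pick_threshold(scores_safe, max_false_reject_rate=0.05):
    """Choose a rejection threshold so that at most `max_false_reject_rate`
    of validation-time safe predictions are (wrongly) rejected.
    Convention: higher score = reject."""
    return float(np.quantile(scores_safe, 1.0 - max_false_reject_rate))

rng = np.random.default_rng(0)
scores_safe = rng.normal(0.0, 1.0, 2000)   # monitor scores on correct predictions
scores_unsafe = rng.normal(2.0, 1.0, 500)  # monitor scores on incorrect predictions

tau = pick_threshold(scores_safe, max_false_reject_rate=0.05)
false_reject = (scores_safe >= tau).mean()
caught = (scores_unsafe >= tau).mean()
print(f"threshold={tau:.2f}  false-reject={false_reject:.3f}  errors caught={caught:.3f}")
```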
arXiv Detail & Related papers (2024-05-14T14:32:58Z)
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise?
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples and OOD samples is an overlooked yet important limitation of existing methods.
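This limitation can be made measurable with a hedged sketch: compute the usual ID-vs-OOD AUROC, then the AUROC between incorrectly classified ID samples and OOD samples (all scores below are synthetic stand-ins):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical monitor scores (higher = "more OOD") on three groups.
rng = np.random.default_rng(0)
scores_id_correct = rng.normal(0.0, 1.0, 1000)
scores_id_wrong = rng.normal(0.8, 1.0, 200)   # misclassified ID inputs
scores_ood = rng.normal(1.2, 1.0, 500)

# Standard OOD benchmark: all ID vs OOD -- looks decent.
y = np.r_[np.zeros(1200), np.ones(500)]
s = np.r_[scores_id_correct, scores_id_wrong, scores_ood]
print(f"ID vs OOD AUROC: {roc_auc_score(y, s):.2f}")

# The overlooked split: misclassified-ID vs OOD separates far worse, so a
# threshold tuned on the standard benchmark misjudges these error cases.
y2 = np.r_[np.zeros(200), np.ones(500)]
s2 = np.r_[scores_id_wrong, scores_ood]
print(f"misclassified-ID vs OOD AUROC: {roc_auc_score(y2, s2):.2f}")
```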
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
- Can Pre-trained Networks Detect Familiar Out-of-Distribution Data?
We study the effect of OOD data encountered during pre-training (PT-OOD) on the OOD detection performance of pre-trained networks.
We find that the low linear separability of PT-OOD in the feature space heavily degrades PT-OOD detection performance.
We propose a solution unique to large-scale pre-trained models: leveraging their powerful instance-by-instance discriminative representations.
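One instance-wise score of the kind alluded to is the distance to the k-th nearest ID neighbor in a frozen pre-trained feature space; the sketch below illustrates that generic idea (not this paper's exact method) on synthetic features:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical frozen pre-trained features for ID data and test inputs.
rng = np.random.default_rng(0)
id_feats = rng.normal(0.0, 1.0, size=(1000, 16))
test_feats = rng.normal(1.5, 1.0, size=(10, 16))

# Instance-by-instance score: distance to the k-th nearest ID feature.
# A large distance means the input has no close ID neighbor -> likely OOD.
k = 10
nn = NearestNeighbors(n_neighbors=k).fit(id_feats)
dists, _ = nn.kneighbors(test_feats)
ood_score = dists[:, -1]  # distance to the k-th neighbor
print(np.round(ood_score, 2))
```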
arXiv Detail & Related papers (2023-10-02T02:01:00Z)
- Conservative Prediction via Data-Driven Confidence Minimization
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
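A hedged sketch of the underlying objective: standard cross-entropy on labeled training data plus a term that pushes predictions on the uncertainty dataset toward the uniform distribution. The function name and weighting below are hypothetical, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_min_loss(logits_train, labels, logits_unc, lam=0.5):
    """Cross-entropy on training data plus a confidence-minimization term
    on an uncertainty dataset (a sketch of the idea, not the exact DDCM loss)."""
    p_train = softmax(logits_train)
    ce = -np.log(p_train[np.arange(len(labels)), labels] + 1e-12).mean()
    # Cross-entropy against the uniform distribution (up to a log-K constant,
    # this is KL(uniform || p)); it is minimized when p is uniform, i.e.,
    # when the model is maximally unconfident on the uncertainty data.
    p_unc = softmax(logits_unc)
    conf_term = -np.log(p_unc + 1e-12).mean()
    return ce + lam * conf_term

rng = np.random.default_rng(0)
loss = confidence_min_loss(rng.normal(size=(32, 10)),
                           rng.integers(0, 10, 32),
                           rng.normal(size=(16, 10)))
print(f"combined loss: {loss:.3f}")
```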
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
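A minimal sketch of the filtering step alone, assuming a base OOD score in [0, 1]; the thresholds and names are hypothetical, and the ID memory bank and semantically-consistent objective are not shown.

```python
import numpy as np

def in_out_aware_filter(scores, tau_id=0.2, tau_ood=0.8):
    """Split unlabeled online inputs into pseudo-ID, pseudo-OOD, and
    uncertain sets using a base OOD score in [0, 1]. A sketch of the
    filtering idea only; the full AUTO method additionally maintains an
    ID memory bank and a semantically-consistent training objective."""
    pseudo_id = scores <= tau_id     # confidently in-distribution
    pseudo_ood = scores >= tau_ood   # confidently out-of-distribution
    uncertain = ~(pseudo_id | pseudo_ood)  # excluded from the update
    return pseudo_id, pseudo_ood, uncertain

scores = np.array([0.05, 0.15, 0.5, 0.85, 0.95])
for mask in in_out_aware_filter(scores):
    print(mask)
```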
arXiv Detail & Related papers (2023-03-22T02:28:54Z)
- Using Semantic Information for Defining and Detecting OOD Inputs
Out-of-distribution (OOD) detection has received some attention recently.
We demonstrate that the current detectors inherit the biases in the training dataset.
This can lead current OOD detectors to reject inputs that lie outside the training distribution but carry the same semantic information (e.g., the training class labels).
We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets.
arXiv Detail & Related papers (2023-02-21T21:31:20Z)
- Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data
A key ingredient for ensuring safe system behaviour is out-of-distribution detection.
Most methods rely on hidden features output by the encoder.
In this work, we focus on leveraging soft-probabilities in a black-box framework.
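The simplest monitor in this spirit scores an input by the entropy of the model's output distribution, needing nothing but soft-probabilities; the sketch below shows that baseline, not Rainproof's exact score.

```python
import numpy as np

def entropy_score(probs):
    """Black-box OOD score from soft-probabilities only: the Shannon
    entropy of the model's output distribution. High entropy means the
    model is unsure, which serves as a rejection signal. (A simple
    baseline in the spirit of soft-probability monitors, not the exact
    score proposed by Rainproof.)"""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=-1)

confident = np.array([[0.97, 0.01, 0.01, 0.01]])
diffuse = np.array([[0.25, 0.25, 0.25, 0.25]])
print(entropy_score(confident), entropy_score(diffuse))  # low vs high
```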
arXiv Detail & Related papers (2022-12-18T21:22:28Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that combines, from first principles, a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way, we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
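A hedged sketch of how ensemble disagreement becomes an OOD score once such an ensemble exists; the transductive training scheme itself is not shown, and the data below are random stand-ins.

```python
import numpy as np

# Hypothetical class-probability outputs from an ensemble of M models
# on a test batch (random stand-ins for trained ensemble members).
rng = np.random.default_rng(0)
M, batch, classes = 5, 8, 10
probs = rng.dirichlet(np.ones(classes), size=(M, batch))  # (M, batch, classes)

# Disagreement score: variance of ensemble predictions per input. The
# transductive method trains members to contradict each other only on
# OOD inputs; here we only show how disagreement is turned into a score.
mean_p = probs.mean(axis=0)                                # (batch, classes)
disagreement = ((probs - mean_p) ** 2).mean(axis=(0, 2))   # (batch,)
print(np.round(disagreement, 4))  # higher -> flagged as OOD
```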
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.