Class-wise Thresholding for Detecting Out-of-Distribution Data
- URL: http://arxiv.org/abs/2110.15292v1
- Date: Thu, 28 Oct 2021 16:54:48 GMT
- Title: Class-wise Thresholding for Detecting Out-of-Distribution Data
- Authors: Matteo Guarrera, Baihong Jin, Tung-Wei Lin, Maria Zuluaga, Yuxin Chen,
Alberto Sangiovanni-Vincentelli
- Abstract summary: We consider the problem of detecting out-of-distribution (OoD) input data when using deep neural networks.
We propose a class-wise thresholding scheme that can be applied to most existing OoD detection algorithms.
- Score: 6.5295089440496055
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We consider the problem of detecting out-of-distribution (OoD) input data when
using deep neural networks, and we propose a simple yet effective way to
improve the robustness of several popular OoD detection methods against label
shift. Our work is motivated by the observation that most existing OoD
detection algorithms consider all training/test data as a whole, regardless of
which class entry each input activates (inter-class differences). Through
extensive experimentation, we have found that such practice leads to a detector
whose performance is sensitive and vulnerable to label shift. To address this
issue, we propose a class-wise thresholding scheme that can be applied to most
existing OoD detection algorithms and can maintain similar OoD detection
performance even in the presence of label shift in the test distribution.
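The class-wise scheme described in the abstract can be sketched as follows. This is a minimal illustration with synthetic scores; the function names, the per-class quantile calibration, and the 95% target rate are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def fit_classwise_thresholds(scores, pred_classes, num_classes, tpr=0.95):
    # One threshold per class: the `tpr` quantile of in-distribution
    # OoD scores among samples that activate that class entry.
    thresholds = np.full(num_classes, np.inf)
    for c in range(num_classes):
        cls_scores = scores[pred_classes == c]
        if cls_scores.size > 0:
            thresholds[c] = np.quantile(cls_scores, tpr)
    return thresholds

def detect_ood(scores, pred_classes, thresholds):
    # A sample is flagged as OoD when its score exceeds the threshold
    # of the class its input activates, not a single global threshold.
    return scores > thresholds[pred_classes]

# Synthetic in-distribution calibration data (higher score = more OoD-like).
rng = np.random.default_rng(0)
id_scores = rng.normal(size=1000)
id_classes = rng.integers(0, 10, size=1000)
thr = fit_classwise_thresholds(id_scores, id_classes, num_classes=10)
flags = detect_ood(id_scores, id_classes, thr)  # roughly 5% flagged on ID data
```

Because each class is calibrated separately, the per-class false-positive rate stays near 1 - tpr even if the test-time class proportions shift, which is the robustness to label shift the abstract claims.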
Related papers
- Collaborative Feature-Logits Contrastive Learning for Open-Set Semi-Supervised Object Detection [75.02249869573994]
In open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes.
Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes.
We propose a simple yet effective method, termed Collaborative Feature-Logits Detector (CFL-Detector).
arXiv Detail & Related papers (2024-11-20T02:57:35Z)
- Resultant: Incremental Effectiveness on Likelihood for Unsupervised Out-of-Distribution Detection [63.93728560200819]
Unsupervised out-of-distribution (U-OOD) detection aims to identify anomalous data samples with a detector trained solely on unlabeled in-distribution (ID) data.
Recent studies have developed various detectors based on deep generative models (DGMs) to move beyond likelihood.
We apply two techniques for each direction, specifically post-hoc prior and dataset entropy-mutual calibration.
Experimental results demonstrate that the Resultant could be a new state-of-the-art U-OOD detector.
arXiv Detail & Related papers (2024-09-05T02:58:13Z)
- When and How Does In-Distribution Label Help Out-of-Distribution Detection? [38.874518492468965]
This paper offers a formal understanding to theoretically delineate the impact of ID labels on OOD detection.
We employ a graph-theoretic approach, rigorously analyzing the separability of ID data from OOD data in a closed-form manner.
We present empirical results on both simulated and real datasets, validating theoretical guarantees and reinforcing our insights.
arXiv Detail & Related papers (2024-05-28T22:34:53Z)
- ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms [27.67837353597245]
Out-of-distribution (OOD) detection is notoriously ill-defined.
Recent works argue for a focus on failure detection.
Complex OOD detectors that were previously considered state-of-the-art now perform similarly to, or even worse than, the simple maximum softmax probability baseline.
arXiv Detail & Related papers (2023-10-03T02:37:57Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Out-of-Distribution Detection using Outlier Detection Methods [0.0]
Out-of-distribution (OOD) detection deals with anomalous input to neural networks.
We use outlier detection algorithms to detect anomalous input as reliably as specialized methods from the field of OOD detection.
No neural network adaptation is required; detection is based on the model's softmax score.
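The softmax-score-based detection mentioned above can be sketched with the maximum softmax probability (MSP) baseline; the function name and the 0.5 threshold below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def max_softmax_probability(logits):
    # Numerically stable softmax; the maximum class probability serves
    # as an in-distribution confidence score (low MSP suggests OOD input).
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

logits = np.array([[4.0, 0.1, 0.2],   # confident prediction -> likely ID
                   [0.9, 1.0, 1.1]])  # near-uniform logits -> possibly OOD
msp = max_softmax_probability(logits)
is_ood = msp < 0.5  # threshold would be tuned on held-out ID data
```

No gradient access or network modification is needed, which matches the entry's claim that detection works on the model's softmax output alone.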
arXiv Detail & Related papers (2021-08-18T16:05:53Z)
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms the state-of-the-art one-class classification method by 6.3 AUC points and 12.5 average-precision points.
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Entropy Maximization and Meta Classification for Out-Of-Distribution Detection in Semantic Segmentation [7.305019142196585]
Detecting "out-of-distribution" (OoD) samples is crucial for many applications such as automated driving.
A natural baseline approach to OoD detection is to threshold on the pixel-wise softmax entropy.
We present a two-step procedure that significantly improves that approach.
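The pixel-wise softmax-entropy baseline mentioned above can be sketched as follows; the array shapes and the threshold value are illustrative assumptions:

```python
import numpy as np

def softmax_entropy(logits):
    # Numerically stable softmax over the class axis, followed by
    # Shannon entropy per pixel; high entropy suggests OoD content.
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Hypothetical logits for a 4x4 segmentation output with 5 classes.
rng = np.random.default_rng(1)
logits = rng.normal(size=(4, 4, 5))
entropy = softmax_entropy(logits)  # shape (4, 4), values in [0, log 5]
ood_mask = entropy > 1.0           # threshold is a tunable hyperparameter
```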
arXiv Detail & Related papers (2020-12-09T11:01:06Z)
- A General Framework For Detecting Anomalous Inputs to DNN Classifiers [37.79389209020564]
We propose an unsupervised anomaly detection framework based on the internal deep neural network layer representations.
We evaluate the proposed methods on well-known image classification datasets with strong adversarial attacks and OOD inputs.
arXiv Detail & Related papers (2020-07-29T22:57:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.