Probing Predictions on OOD Images via Nearest Categories
- URL: http://arxiv.org/abs/2011.08485v4
- Date: Thu, 24 Feb 2022 00:22:18 GMT
- Title: Probing Predictions on OOD Images via Nearest Categories
- Authors: Yao-Yuan Yang, Cyrus Rashtchian, Ruslan Salakhutdinov, Kamalika
Chaudhuri
- Abstract summary: We study out-of-distribution (OOD) prediction behavior of neural networks when they classify images from unseen classes or corrupted images.
We introduce a new measure, nearest category generalization (NCG), where we compute the fraction of OOD inputs that are classified with the same label as their nearest neighbor in the training set.
We find that robust networks have consistently higher NCG accuracy than natural training, even when the OOD data is much farther away than the robustness radius.
- Score: 97.055916832257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study out-of-distribution (OOD) prediction behavior of neural networks
when they classify images from unseen classes or corrupted images. To probe the
OOD behavior, we introduce a new measure, nearest category generalization
(NCG), where we compute the fraction of OOD inputs that are classified with the
same label as their nearest neighbor in the training set. Our motivation stems
from understanding the prediction patterns of adversarially robust networks,
since previous work has identified unexpected consequences of training to be
robust to norm-bounded perturbations. We find that robust networks have
consistently higher NCG accuracy than natural training, even when the OOD data
is much farther away than the robustness radius. This implies that the local
regularization of robust training has a significant impact on the network's
decision regions. We replicate our findings using many datasets, comparing new
and existing training methods. Overall, adversarially robust networks resemble
a nearest neighbor classifier when it comes to OOD data. Code available at
https://github.com/yangarbiter/nearest-category-generalization.
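To make the NCG measure concrete, here is a minimal sketch of how one could compute it, assuming flattened image arrays, a brute-force Euclidean nearest-neighbor search, and a placeholder `predict` function. The authors' actual implementation lives in the linked repository and may differ (for example, in the distance metric used).

```python
# Minimal NCG sketch: fraction of OOD inputs whose predicted label matches
# the training label of their nearest training example. Illustrative only;
# `predict` and the distance metric are placeholders, not the authors' code.
import numpy as np

def ncg_accuracy(predict, X_train, y_train, X_ood):
    """predict: callable mapping an array of inputs to integer labels.
    X_train, X_ood: 2-D arrays of flattened images; y_train: integer labels."""
    preds = predict(X_ood)
    matches = 0
    for x, pred in zip(X_ood, preds):
        # Euclidean nearest neighbor in the training set (brute force).
        nn_idx = np.argmin(np.linalg.norm(X_train - x, axis=1))
        matches += int(pred == y_train[nn_idx])
    return matches / len(X_ood)

# Toy usage with a dummy "classifier" that thresholds the first feature.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 8))
y_train = (X_train[:, 0] > 0).astype(int)
X_ood = rng.normal(loc=3.0, size=(20, 8))        # shifted, "unseen" inputs
predict = lambda X: (X[:, 0] > 0).astype(int)
print(f"NCG accuracy: {ncg_accuracy(predict, X_train, y_train, X_ood):.2f}")
```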
Related papers
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
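A hedged sketch of idea (2): paste a tail-class patch onto a context-rich OOD image so the tail class appears in richer contexts. The patch size, placement, and hard-paste blending below are illustrative assumptions, not EAT's actual augmentation.

```python
# Sketch of overlaying a tail-class crop onto an OOD image. The exact overlay
# scheme (region size, blending) in EAT may differ; this conveys the idea only.
import numpy as np

def overlay_tail_on_ood(tail_img, ood_img, rng, frac=0.5):
    """Both images are HxWxC float arrays in [0, 1]. A random square patch of
    the tail image, covering `frac` of each side, is pasted onto the OOD image."""
    h, w, _ = ood_img.shape
    ph, pw = int(h * frac), int(w * frac)
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    out = ood_img.copy()
    out[top:top + ph, left:left + pw] = tail_img[top:top + ph, left:left + pw]
    return out  # labeled with the tail class during training

rng = np.random.default_rng(0)
tail = rng.random((32, 32, 3))
ood = rng.random((32, 32, 3))
augmented = overlay_tail_on_ood(tail, ood, rng)
print(augmented.shape)
```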
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Preventing Arbitrarily High Confidence on Far-Away Data in Point-Estimated Discriminative Neural Networks
ReLU networks have been shown to almost always yield high confidence predictions when the test data are far away from the training set.
We overcome this problem by adding a term to the output of the neural network that corresponds to the logit of an extra class.
This technique provably prevents arbitrarily high confidence on far-away test data while maintaining a simple discriminative point-estimate training.
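A minimal sketch of the extra-class idea: appending one more logit drives softmax confidence on the real classes down wherever that logit is large. The concrete form of the extra logit below (growing with the input's norm) is purely an assumption for illustration; the paper derives its own term.

```python
# Illustrative sketch: append an extra "none of the above" logit to a
# classifier's outputs, so far-away inputs cannot get high confidence.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def augmented_confidence(logits, x, alpha=1.0):
    """logits: (K,) class logits for input x. Returns max probability over the
    K real classes after appending the extra-class logit."""
    extra = alpha * np.linalg.norm(x)          # placeholder extra-class logit
    probs = softmax(np.append(logits, extra))
    return probs[:-1].max()

x_near, x_far = np.ones(8), 100 * np.ones(8)
logits = np.array([4.0, 1.0, 0.5])             # same logits in both cases
print(augmented_confidence(logits, x_near))    # high confidence near data
print(augmented_confidence(logits, x_far))     # confidence driven toward 0
```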
arXiv Detail & Related papers (2023-11-07T03:19:16Z)
- Deep Neural Networks Tend To Extrapolate Predictably
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
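One way such an insight could translate into a risk-sensitive decision rule, sketched under the assumption that the "OOD constant" can be estimated (for instance, as the mean output on noise); this is not necessarily the paper's recipe.

```python
# Hedged sketch: if a prediction is close to the network's typical far-OOD
# output, abstain and take a cautious default action instead.
import numpy as np

def decide(probs, ood_constant, threshold=0.1, default_action=-1):
    """probs: predicted class probabilities; ood_constant: estimated constant
    the network reverts to on far-OOD inputs. Returns a class index, or the
    cautious default action when the prediction looks like the OOD constant."""
    if np.linalg.norm(probs - ood_constant) < threshold:
        return default_action
    return int(np.argmax(probs))

ood_constant = np.array([0.25, 0.25, 0.25, 0.25])   # e.g., mean output on noise
print(decide(np.array([0.9, 0.05, 0.03, 0.02]), ood_constant))   # -> class 0
print(decide(np.array([0.26, 0.25, 0.25, 0.24]), ood_constant))  # -> -1 (abstain)
```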
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
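For context, the energy score at the core of energy-based OOD detection is just a negative log-sum-exp of the logits; GNNSafe's additional energy propagation over graph edges is omitted in this sketch.

```python
# Minimal energy-score sketch in the spirit of energy-based OOD detection:
# lower energy (higher logsumexp of logits) suggests in-distribution.
import numpy as np

def energy_score(logits, temperature=1.0):
    """logits: (N, K) array. Returns per-node energies; larger = more OOD."""
    z = logits / temperature
    m = z.max(axis=1, keepdims=True)
    logsumexp = m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1))
    return -temperature * logsumexp

logits = np.array([[8.0, 0.1, 0.2],    # confident node -> low energy
                   [0.4, 0.5, 0.3]])   # ambiguous node -> higher energy
print(energy_score(logits))
```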
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift [18.760716606922482]
We show that a similar but surprising phenomenon also holds for the agreement between pairs of neural network classifiers.
Our prediction algorithm outperforms previous methods both in shifts where agreement-on-the-line holds and, surprisingly, when accuracy is not on the line.
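A sketch of how agreement-on-the-line could be used to estimate accuracy under shift, assuming a plain least-squares fit over a few reference shifts with hypothetical numbers; the paper's actual procedure involves further steps.

```python
# Sketch: when accuracy and agreement both lie on linear trends across
# distribution shifts, OOD accuracy can be estimated from OOD agreement
# alone (no OOD labels needed). Numbers below are hypothetical.
import numpy as np

def fit_line(x, y):
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

# Per-shift agreement (unlabeled) and accuracy (labeled) on reference shifts.
agreements = np.array([0.95, 0.90, 0.84, 0.78])
accuracies = np.array([0.93, 0.87, 0.80, 0.73])
slope, intercept = fit_line(agreements, accuracies)

new_shift_agreement = 0.70                     # measured without labels
estimated_accuracy = slope * new_shift_agreement + intercept
print(f"estimated OOD accuracy: {estimated_accuracy:.2f}")
```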
arXiv Detail & Related papers (2022-06-27T07:50:47Z) - Metric Learning and Adaptive Boundary for Out-of-Domain Detection [0.9236074230806579]
We have designed an OOD detection algorithm that is independent of OOD data.
Our algorithm is based on a simple but efficient approach that combines metric learning with an adaptive decision boundary.
Compared to other algorithms, our proposed algorithm significantly improves OOD performance in scenarios with fewer classes.
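A hedged sketch of the general recipe: class centroids in an embedding space plus per-class decision radii, with inputs outside every boundary flagged as out-of-domain. The quantile rule for the radii below is an assumption standing in for the paper's learned adaptive boundary.

```python
# Centroid-plus-radius sketch of metric learning with a decision boundary.
import numpy as np

def fit_centroids_and_radii(emb, labels, quantile=0.95):
    classes = np.unique(labels)
    centroids, radii = {}, {}
    for c in classes:
        pts = emb[labels == c]
        centroids[c] = pts.mean(axis=0)
        dists = np.linalg.norm(pts - centroids[c], axis=1)
        radii[c] = np.quantile(dists, quantile)   # simple quantile rule
    return centroids, radii

def detect(x, centroids, radii):
    """Return the predicted class, or None if x is outside all boundaries."""
    best_c, best_d = None, np.inf
    for c, mu in centroids.items():
        d = np.linalg.norm(x - mu)
        if d < best_d:
            best_c, best_d = c, d
    return best_c if best_d <= radii[best_c] else None

rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(6, 1, (50, 4))])
labels = np.array([0] * 50 + [1] * 50)
centroids, radii = fit_centroids_and_radii(emb, labels)
print(detect(np.zeros(4), centroids, radii))        # in-domain -> 0
print(detect(np.full(4, 30.0), centroids, radii))   # far away -> None
```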
arXiv Detail & Related papers (2022-04-22T17:54:55Z)
- OODformer: Out-Of-Distribution Detection Transformer
In real-world safety-critical applications, it is important to know whether a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that combines, from first principles, a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss of prediction accuracy, and close to state-of-the-art OOD detection performance on non-manipulated OOD data.
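A minimal sketch of what such an OOD-aware classifier could look like: a binary in-vs-out detector gates a standard K-class classifier, yielding K+1 outputs. Both components are placeholders here; in particular, the paper's detector is certifiable, which this sketch does not capture.

```python
# Combine a detector's in-distribution probability with class probabilities
# into K+1 outputs, the last being the OOD class.
import numpy as np

def ood_aware_predict(class_probs, p_in):
    """class_probs: (K,) classifier probabilities; p_in: detector's
    probability that the input is in-distribution."""
    return np.append(p_in * class_probs, 1.0 - p_in)

print(ood_aware_predict(np.array([0.7, 0.2, 0.1]), p_in=0.95))
print(ood_aware_predict(np.array([0.7, 0.2, 0.1]), p_in=0.10))  # mostly OOD
```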
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
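The resulting detection rule can be sketched as follows, assuming the ensemble has already been trained with the paper's regularization (not reproduced here): flag inputs where member predictions contradict one another.

```python
# Disagreement-based OOD flags over an already-trained ensemble.
import numpy as np

def disagreement_flags(member_preds):
    """member_preds: (M, N) integer predictions from M ensemble members on N
    inputs. Returns a boolean array: True where members disagree (likely OOD)."""
    member_preds = np.asarray(member_preds)
    return ~(member_preds == member_preds[0]).all(axis=0)

preds = np.array([[0, 1, 2, 1],
                  [0, 1, 0, 1],    # disagrees on input 2
                  [0, 1, 2, 1]])
print(disagreement_flags(preds))   # [False False  True False]
```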
arXiv Detail & Related papers (2020-12-10T16:55:13Z)