ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining
- URL: http://arxiv.org/abs/2006.15207v4
- Date: Wed, 30 Jun 2021 02:33:11 GMT
- Title: ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining
- Authors: Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
- Abstract summary: Adversarial Training with informative Outlier Mining (ATOM) improves the robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
- Score: 51.19164318924997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting out-of-distribution (OOD) inputs is critical for safely deploying
deep learning models in an open-world setting. However, existing OOD detection
solutions can be brittle in the open world, facing various types of adversarial
OOD inputs. While methods leveraging auxiliary OOD data have emerged, our
analysis on illustrative examples reveals a key insight: the majority of
auxiliary OOD examples may not meaningfully improve, or may even hurt, the
decision boundary of the OOD detector, an effect also observed in empirical
results on real data. In this paper, we provide a theoretically motivated method,
Adversarial Training with informative Outlier Mining (ATOM), which improves the
robustness of OOD detection. We show that, by mining informative auxiliary OOD
data, one can significantly improve OOD detection performance, and somewhat
surprisingly, generalize to unseen adversarial attacks. ATOM achieves
state-of-the-art performance under a broad family of classic and adversarial
OOD evaluation tasks. For example, on the CIFAR-10 in-distribution dataset,
ATOM reduces the FPR (at TPR 95%) by up to 57.99% under adversarial OOD inputs,
surpassing the previous best baseline by a large margin.
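As a rough illustration of the outlier-mining step described in the abstract, the sketch below selects auxiliary outliers whose predicted OOD scores sit just above the q-quantile of the pool, so that easy outliers the detector already rejects are discarded. This is a minimal NumPy sketch, not the paper's implementation: the function name, the stand-in random scores, and the q value are all illustrative, and the (K+1)-way classifier, adversarial perturbation, and training loop from the paper are omitted.

```python
import numpy as np

def mine_informative_outliers(ood_scores, n_select, q=0.125):
    """Pick informative auxiliary outliers from a large pool.

    ood_scores : 1-D array of the model's OOD score per auxiliary sample
                 (e.g., the softmax probability of the extra "outlier"
                 class in a (K+1)-way classifier).
    n_select   : number of outliers to keep for the next training epoch.
    q          : quantile offset; sorting ascending puts the hardest
                 (lowest-scoring) outliers first, and selection starts
                 at index q*N to skip the most extreme tail.
    """
    order = np.argsort(ood_scores)        # ascending: hardest outliers first
    start = int(q * len(ood_scores))      # skip the extreme low-score tail
    return order[start:start + n_select]  # indices into the auxiliary pool

# Toy usage: pool of 10,000 auxiliary outliers, keep 1,000 per epoch.
rng = np.random.default_rng(0)
scores = rng.uniform(size=10_000)         # stand-in for real model scores
idx = mine_informative_outliers(scores, n_select=1_000, q=0.125)
print(idx.shape, scores[idx].min(), scores[idx].max())
```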
Related papers
- Learning to Augment Distributions for Out-of-Distribution Detection [49.12437300327712]
Open-world classification systems should discern out-of-distribution (OOD) data whose labels deviate from those of in-distribution (ID) cases.
We propose Distributional-Augmented OOD Learning (DAL) to alleviate the OOD distribution discrepancy.
arXiv Detail & Related papers (2023-11-03T09:19:33Z)
- OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection [82.85303878718207]
Out-of-Distribution (OOD) detection is critical for the reliable operation of open-world intelligent systems.
This paper presents OpenOOD v1.5, a significant improvement from its predecessor that ensures accurate, standardized, and user-friendly evaluation of OOD detection methodologies.
arXiv Detail & Related papers (2023-06-15T17:28:00Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well in unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can boost the performance of OOD detection significantly.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- MOOD: Multi-level Out-of-distribution Detection [13.207044902083057]
Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment.
We propose a novel framework, multi-level out-of-distribution detection (MOOD), which exploits intermediate classifier outputs for dynamic and efficient OOD inference.
MOOD achieves up to 71.05% computational reduction in inference, while maintaining competitive OOD detection performance.
arXiv Detail & Related papers (2021-04-30T02:18:31Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
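The headline numbers above (e.g., ATOM's 57.99% FPR reduction) use FPR at 95% TPR, a standard threshold-based OOD detection metric. Exact conventions vary across papers; the minimal NumPy sketch below follows the common one, where in-distribution inputs are the positives and a higher score means "more in-distribution". All names and the synthetic scores are illustrative; this code does not come from any of the papers listed.

```python
import numpy as np

def fpr_at_95_tpr(scores_id, scores_ood):
    """False positive rate on OOD data at 95% true positive rate on ID data.

    Choose the threshold that keeps 95% of in-distribution (ID) scores
    above it, then report the fraction of OOD scores that still pass,
    i.e., OOD inputs wrongly accepted as in-distribution.
    """
    threshold = np.percentile(scores_id, 5)         # 95% of ID scores lie above
    return float(np.mean(scores_ood >= threshold))  # OOD accepted as ID

# Toy usage with overlapping synthetic score distributions.
rng = np.random.default_rng(0)
scores_id = rng.normal(2.0, 1.0, 5_000)    # in-distribution: higher scores
scores_ood = rng.normal(0.0, 1.0, 5_000)   # OOD: lower scores on average
print(f"FPR at 95% TPR: {fpr_at_95_tpr(scores_id, scores_ood):.3f}")
```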
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.