OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2306.09301v2
- Date: Sat, 17 Jun 2023 01:14:56 GMT
- Title: OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection
- Authors: Jingyang Zhang, Jingkang Yang, Pengyun Wang, Haoqi Wang, Yueqian Lin,
Haoran Zhang, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Yixuan Li,
Ziwei Liu, Yiran Chen, Hai Li
- Abstract summary: Out-of-Distribution (OOD) detection is critical for the reliable operation of open-world intelligent systems.
This paper presents OpenOOD v1.5, a significant improvement from its predecessor that ensures accurate, standardized, and user-friendly evaluation of OOD detection methodologies.
- Score: 81.25718226042832
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-Distribution (OOD) detection is critical for the reliable operation of
open-world intelligent systems. Despite the emergence of an increasing number
of OOD detection methods, the evaluation inconsistencies present challenges for
tracking the progress in this field. OpenOOD v1 initiated the unification of
the OOD detection evaluation but faced limitations in scalability and
usability. In response, this paper presents OpenOOD v1.5, a significant
improvement from its predecessor that ensures accurate, standardized, and
user-friendly evaluation of OOD detection methodologies. Notably, OpenOOD v1.5
extends its evaluation capabilities to large-scale datasets such as ImageNet,
investigates full-spectrum OOD detection which is important yet underexplored,
and introduces new features including an online leaderboard and an easy-to-use
evaluator. This work also contributes in-depth analysis and insights derived
from comprehensive experimental results, thereby enriching the knowledge pool
of OOD detection methodologies. With these enhancements, OpenOOD v1.5 aims to
drive advancements and offer a more robust and comprehensive evaluation
benchmark for OOD detection research.
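As a rough illustration of the kind of standardized post-hoc evaluation the benchmark codifies (this is a generic sketch, not OpenOOD's evaluator API), one can score in-distribution and OOD samples with maximum softmax probability and report AUROC; the model and data loaders below are hypothetical placeholders.

import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def msp_scores(model, loader, device="cpu"):
    # Maximum softmax probability: higher values suggest in-distribution inputs.
    model.eval()
    scores = []
    for images, _ in loader:
        probs = F.softmax(model(images.to(device)), dim=1)
        scores.append(probs.max(dim=1).values.cpu())
    return torch.cat(scores)

def ood_auroc(model, id_loader, ood_loader, device="cpu"):
    # Label ID samples 1 and OOD samples 0; AUROC measures how well MSP separates them.
    id_s = msp_scores(model, id_loader, device)
    ood_s = msp_scores(model, ood_loader, device)
    labels = torch.cat([torch.ones_like(id_s), torch.zeros_like(ood_s)])
    return roc_auc_score(labels.numpy(), torch.cat([id_s, ood_s]).numpy())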
Related papers
- Scaling for Training Time and Post-hoc Out-of-distribution Detection
Enhancement [41.650761556671775]
In this paper, we offer insights and analyses of recent state-of-the-art out-of-distribution (OOD) detection methods.
We demonstrate that activation pruning has a detrimental effect on OOD detection, while activation scaling enhances it.
We achieve AUROC gains of +1.85% for near-OOD and +0.74% for far-OOD datasets on the OpenOOD v1.5 ImageNet-1K benchmark.
arXiv Detail & Related papers (2023-09-30T02:10:54Z)
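The pruning-versus-scaling contrast noted in the entry above can be pictured with the simplified operations below, applied to penultimate-layer features; the percentile choice and the exponential rescaling are illustrative assumptions and may differ from the paper's exact formulations.

import torch

def prune_activations(feat, percentile=0.9):
    # Pruning: zero out activations below a per-sample percentile threshold.
    thresh = torch.quantile(feat, percentile, dim=1, keepdim=True)
    return torch.where(feat >= thresh, feat, torch.zeros_like(feat))

def scale_activations(feat, percentile=0.9):
    # Scaling: keep all activations but rescale the whole feature vector by a
    # factor derived from how much activation energy sits above the threshold.
    thresh = torch.quantile(feat, percentile, dim=1, keepdim=True)
    top = torch.where(feat >= thresh, feat, torch.zeros_like(feat))
    ratio = feat.sum(dim=1, keepdim=True) / top.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return feat * torch.exp(ratio)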
- AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
arXiv Detail & Related papers (2023-03-22T02:28:54Z)
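The filtering-plus-memory-bank idea in the AUTO entry above can be sketched loosely as follows; the thresholds, bank size, and the use of a generic detector score are illustrative assumptions, not the paper's exact design.

import collections
import torch

class TestTimeFilter:
    # Route unlabeled test samples into pseudo-ID / pseudo-OOD sets by
    # thresholding a detector score and keep confident ID samples in a bank.
    def __init__(self, id_threshold=0.9, ood_threshold=0.5, bank_size=512):
        self.id_threshold = id_threshold
        self.ood_threshold = ood_threshold
        self.id_bank = collections.deque(maxlen=bank_size)

    def update(self, images, scores):
        # scores: per-sample detector scores, higher = more ID-like.
        pseudo_id = images[scores >= self.id_threshold]
        pseudo_ood = images[scores <= self.ood_threshold]
        self.id_bank.extend(pseudo_id)  # pseudo-ID samples feed later online updates
        return pseudo_id, pseudo_ood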
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Existing evaluations of out-of-distribution (OOD) detection methods assume access to test ground truths, i.e., whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can boost the performance of OOD detection significantly.
We take Masked Image Modeling as the pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
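One common way a reconstruction-based detector is instantiated (a generic sketch under that assumption, not necessarily the MOOD pipeline) is to mask part of an input, reconstruct it with a masked-autoencoder-style model, and treat the reconstruction error as the OOD score; the model argument below is a hypothetical network returning reconstructed images.

import torch

@torch.no_grad()
def reconstruction_ood_score(model, images, mask_ratio=0.6):
    # Mask a random fraction of pixels (patch-level masking in practice),
    # reconstruct, and use the masked-region error as the OOD score:
    # higher reconstruction error suggests a more OOD-like input.
    b, _, h, w = images.shape
    mask = (torch.rand(b, 1, h, w, device=images.device) < mask_ratio).float()
    recon = model(images * (1 - mask))  # hypothetical reconstruction forward pass
    err = ((recon - images) ** 2 * mask).flatten(1).sum(dim=1)
    return err / mask.flatten(1).sum(dim=1).clamp(min=1.0)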
- ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining [51.19164318924997]
Adversarial Training with informative Outlier Mining improves the robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
arXiv Detail & Related papers (2020-06-26T20:58:05Z)
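The informative outlier mining idea in the ATOM entry above can be sketched roughly as selecting, from a large auxiliary outlier pool, the samples the current detector finds most ID-like while skipping the very hardest ones; the quantile and scoring convention below are illustrative assumptions rather than the paper's exact procedure.

import torch

def mine_informative_outliers(ood_scores, num_keep, start_quantile=0.5):
    # ood_scores: detector scores for an auxiliary outlier pool, where lower
    # values mean the sample currently looks more in-distribution (harder).
    order = torch.argsort(ood_scores)          # hardest (most ID-like) first
    start = int(start_quantile * len(order))   # skip the very hardest, possibly noisy ones
    return order[start:start + num_keep]       # indices of mined training outliers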
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
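A condensed sketch of such robust training with both adversarially perturbed inliers and outliers is given below; the single-step FGSM attack, the uniform-prediction target for outliers, and the loss weighting are illustrative assumptions, not the exact ALOE recipe.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, loss_fn, eps=8 / 255):
    # One-step adversarial perturbation that increases the given loss.
    x = x.clone().requires_grad_(True)
    loss_fn(model(x)).backward()
    return (x + eps * x.grad.sign()).detach().clamp(0, 1)

def robust_step(model, optimizer, x_in, y_in, x_out, num_classes, lam=0.5):
    # Perturb inliers against the classification loss and outliers against a
    # "predict uniformly" objective, then train on both adversarial batches.
    def uniform_kl(logits):
        target = torch.full_like(logits, 1.0 / num_classes)
        return F.kl_div(F.log_softmax(logits, dim=1), target, reduction="batchmean")

    adv_in = fgsm_perturb(model, x_in, lambda logits: F.cross_entropy(logits, y_in))
    adv_out = fgsm_perturb(model, x_out, uniform_kl)
    loss = F.cross_entropy(model(adv_in), y_in) + lam * uniform_kl(model(adv_out))
    optimizer.zero_grad()  # also clears gradients accumulated during the attacks
    loss.backward()
    optimizer.step()
    return loss.item()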
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.