MOOD: Multi-level Out-of-distribution Detection
- URL: http://arxiv.org/abs/2104.14726v1
- Date: Fri, 30 Apr 2021 02:18:31 GMT
- Title: MOOD: Multi-level Out-of-distribution Detection
- Authors: Ziqian Lin, Sreya Dutta Roy, Yixuan Li
- Abstract summary: Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment.
We propose a novel framework, multi-level out-of-distribution detection (MOOD), which exploits intermediate classifier outputs for dynamic and efficient OOD inference.
MOOD achieves up to 71.05% computational reduction in inference, while maintaining competitive OOD detection performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection is essential to prevent anomalous inputs
from causing a model to fail during deployment. While improved OOD detection
methods have emerged, they often rely on the final layer outputs and require a
full feedforward pass for any given input. In this paper, we propose a novel
framework, multi-level out-of-distribution detection (MOOD), which exploits
intermediate classifier outputs for dynamic and efficient OOD inference. We
explore and establish a direct relationship between the OOD data complexity and
optimal exit level, and show that easy OOD examples can be effectively detected
early without propagating to deeper layers. At each exit, the OOD examples can
be distinguished through our proposed adjusted energy score, which is both
empirically and theoretically suitable for networks with multiple classifiers.
We extensively evaluate MOOD across 10 OOD datasets spanning a wide range of
complexities. Experiments demonstrate that MOOD achieves up to 71.05%
computational reduction in inference, while maintaining competitive OOD
detection performance.
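To make the early-exit idea concrete, the sketch below shows how multi-exit OOD scoring might look: each intermediate classifier produces logits, an energy-style score is computed from them, and the input's estimated complexity decides how deep the network needs to run. The module names, thresholds, and the PNG-compression complexity proxy are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch (not the authors' code): a backbone with multiple
# intermediate classifiers ("exits"), an energy-style OOD score at each exit,
# and a complexity-based choice of exit depth. Names, thresholds, and the
# PNG-compression complexity proxy are assumptions for the example.
import io
import torch
import torch.nn as nn
from PIL import Image


def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Standard energy score: E(x) = -T * logsumexp(f(x) / T); lower values
    # indicate the sample looks more in-distribution.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)


def complexity_bits(image: Image.Image) -> int:
    # Rough complexity proxy: size of the losslessly compressed image in bits.
    # The paper relates input complexity to the optimal exit level; the exact
    # estimator used there may differ from this PNG-based stand-in.
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return 8 * buf.getbuffer().nbytes


class MultiExitNet(nn.Module):
    # Toy backbone: a few conv stages, each followed by a small classifier head.
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
        ])
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c, num_classes))
            for c in (16, 32, 64)
        ])

    def forward_to_exit(self, x: torch.Tensor, exit_idx: int) -> torch.Tensor:
        # Run only the stages needed for the requested exit, then its head,
        # so easy inputs never pay for the deeper layers.
        for stage in self.stages[: exit_idx + 1]:
            x = stage(x)
        return self.exits[exit_idx](x)


def detect_ood(model: MultiExitNet, x: torch.Tensor, image: Image.Image,
               bit_thresholds=(2_000, 20_000)) -> tuple[int, float]:
    # Map the input's estimated complexity to an exit level (hypothetical
    # thresholds), then score the sample at that exit.
    bits = complexity_bits(image)
    exit_idx = sum(bits > t for t in bit_thresholds)  # 0, 1, or 2
    with torch.no_grad():
        logits = model.forward_to_exit(x, exit_idx)
    return exit_idx, energy_score(logits).item()
```

In the paper, the score used at each exit is an adjusted energy score designed for networks with multiple classifiers; the plain energy score above and the complexity-to-exit mapping are simplifications for illustration only.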
Related papers
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z)
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer which creates a richer representation of the input.
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
- Meta OOD Learning for Continuously Adaptive OOD Detection [38.28089655572316]
Out-of-distribution (OOD) detection is crucial to modern deep learning applications.
We propose a novel and more realistic setting called continuously adaptive out-of-distribution (CAOOD) detection.
We develop meta OOD learning (MOL) by designing a learning-to-adapt diagram such that a good OOD detection model is learned during the training process.
arXiv Detail & Related papers (2023-09-21T01:05:45Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find surprisingly that simply using reconstruction-based methods could boost the performance of OOD detection significantly.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE - POsthoc pseudo-Ood REgularization, that generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining [51.19164318924997]
Adversarial Training with informative Outlier Mining improves robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
arXiv Detail & Related papers (2020-06-26T20:58:05Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)