Meta OOD Learning for Continuously Adaptive OOD Detection
- URL: http://arxiv.org/abs/2309.11705v1
- Date: Thu, 21 Sep 2023 01:05:45 GMT
- Title: Meta OOD Learning for Continuously Adaptive OOD Detection
- Authors: Xinheng Wu, Jie Lu, Zhen Fang, Guangquan Zhang
- Abstract summary: Out-of-distribution (OOD) detection is crucial to modern deep learning applications.
We propose a novel and more realistic setting called continuously adaptive out-of-distribution (CAOOD) detection.
We develop meta OOD learning (MOL) by designing a learning-to-adapt paradigm such that a well-initialized OOD detection model is learned during the training process.
- Score: 38.28089655572316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is crucial to modern deep learning
applications by identifying and alerting about the OOD samples that should not
be tested or used for making predictions. Current OOD detection methods have
made significant progress when in-distribution (ID) and OOD samples are drawn
from static distributions. However, this assumption can be unrealistic for
real-world systems, which often undergo continuous variations and shifts in ID
and OOD distributions over time. Therefore, for an effective application in
real-world systems, the development of OOD detection methods that can adapt to
these dynamic and evolving distributions is essential. In this paper, we
propose a novel and more realistic setting called continuously adaptive
out-of-distribution (CAOOD) detection, which targets developing an OOD
detection model that enables dynamic and quick adaptation to a newly arriving
distribution with insufficient ID samples at deployment time. To address
CAOOD, we develop meta OOD learning (MOL) by designing a learning-to-adapt
paradigm such that a well-initialized OOD detection model is learned during
the training process. At test time, MOL maintains OOD detection performance
over shifting distributions by quickly adapting to new distributions with a
few adaptation steps. Extensive experiments on several OOD benchmarks
demonstrate the effectiveness of our method in preserving both ID
classification accuracy and OOD detection performance under continuously
shifting distributions.
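The abstract describes the learning-to-adapt scheme only at a high level. As a rough illustration of how a quickly adaptable initialization can be meta-learned, here is a minimal MAML-style sketch in PyTorch; the tiny linear model, the randomly simulated distribution shifts, and all hyperparameters are assumptions for illustration, not MOL's actual architecture or objective.

```python
# Minimal MAML-style sketch of "learning-to-adapt" for CAOOD (hypothetical;
# the paper's exact objective and OOD score are not given in the abstract).
import torch

def forward(params, x):
    # Tiny linear classifier; stands in for the real OOD detection model.
    w, b = params
    return x @ w + b

def loss_fn(params, x, y):
    return torch.nn.functional.cross_entropy(forward(params, x), y)

def inner_adapt(params, x, y, lr=0.1, steps=1):
    # Quick adaptation to one shifted distribution with few samples/steps.
    for _ in range(steps):
        grads = torch.autograd.grad(loss_fn(params, x, y), params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# Meta-parameters: a "good initialization" learned across training distributions.
dim, n_cls = 16, 4
meta_params = [torch.randn(dim, n_cls, requires_grad=True),
               torch.zeros(n_cls, requires_grad=True)]
opt = torch.optim.SGD(meta_params, lr=0.01)

for step in range(100):
    opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):  # sample a batch of simulated distribution shifts ("tasks")
        xs, ys = torch.randn(8, dim), torch.randint(0, n_cls, (8,))    # support: few ID samples
        xq, yq = torch.randn(32, dim), torch.randint(0, n_cls, (32,))  # query: post-adaptation eval
        adapted = inner_adapt(meta_params, xs, ys)
        meta_loss = meta_loss + loss_fn(adapted, xq, yq)
    meta_loss.backward()  # second-order gradients flow through the inner updates
    opt.step()
```

The point of the second-order gradient (`create_graph=True`) is that the outer update optimizes the initialization for post-adaptation performance, which is what would let a deployed model cope with a new distribution after only a few steps on scarce ID samples.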
Related papers
- Continual Unsupervised Out-of-Distribution Detection [5.019613806273252]
Current approaches assume that out-of-distribution samples originate from an unconcentrated distribution complementary to the training distribution.
We propose a method that starts from a U-OOD detector, which is agnostic to the OOD distribution, and slowly updates during deployment to account for the actual OOD distribution.
Our method uses a new U-OOD scoring function that combines the Mahalanobis distance with a nearest-neighbor approach.
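The summary names the two ingredients of the scoring function but not how they are combined; a minimal sketch, assuming a single class-agnostic Gaussian fit to training features for the Mahalanobis term, Euclidean k-NN distances in the same feature space, and a hypothetical mixing weight `alpha` (the paper's actual combination rule may differ):

```python
# Hypothetical U-OOD score combining Mahalanobis distance with a
# nearest-neighbor term, as named in the summary (exact formula assumed).
import numpy as np

def fit_stats(train_feats):
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(z, mu, prec):
    d = z - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, prec, d))

def knn_dist(z, train_feats, k=5):
    # Distance to the k-th nearest training feature (larger = more OOD-like).
    dists = np.linalg.norm(z[:, None, :] - train_feats[None, :, :], axis=-1)
    return np.sort(dists, axis=1)[:, k - 1]

def u_ood_score(z, train_feats, mu, prec, alpha=0.5):
    # alpha is an assumed mixing weight; higher score = more likely OOD.
    return alpha * mahalanobis(z, mu, prec) + (1 - alpha) * knn_dist(z, train_feats)

train = np.random.randn(500, 32)       # stand-in penultimate-layer features
mu, prec = fit_stats(train)
test = np.random.randn(10, 32) + 3.0   # a shifted batch should score higher
print(u_ood_score(test, train, mu, prec))
```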
arXiv Detail & Related papers (2024-06-04T13:57:34Z)
- Distilling the Unknown to Unveil Certainty [66.29929319664167]
Out-of-distribution (OOD) detection is essential in identifying test samples that deviate from the in-distribution (ID) data upon which a standard network is trained.
This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available.
arXiv Detail & Related papers (2023-11-14T08:05:02Z)
- Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, on which the predictor cannot make predictions as valid as those on in-distribution (ID) data.
It is typically hard to collect real OOD data for training a predictor capable of discerning OOD patterns.
We propose a data generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that mitigates erroneous OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
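The three components are named but not specified in the summary. The skeleton below is one plausible reading, assuming a max-softmax filter with hypothetical thresholds `t_id`/`t_ood`, a plain list as the ID memory bank, and a KL term toward a frozen copy of the source model standing in for the semantically-consistent objective; none of these choices are confirmed by the paper.

```python
# Hypothetical skeleton of test-time OOD adaptation in the spirit of AUTO:
# filter unlabeled online samples into pseudo-ID / pseudo-OOD, keep an ID
# memory bank, and penalize drift from the frozen source model (the paper's
# exact thresholds, losses, and update rules are not given in the summary).
import copy, torch, torch.nn.functional as F

def msp_score(logits):
    return logits.softmax(dim=1).max(dim=1).values  # max softmax probability

def test_time_step(model, frozen, opt, batch, bank, t_id=0.9, t_ood=0.5):
    score = msp_score(model(batch))
    bank.extend(batch[score > t_id])    # in-out-aware filter -> ID memory bank
    ood = batch[score < t_ood]
    if len(ood) == 0 or len(bank) == 0:
        return
    id_batch = torch.stack(bank[-32:])  # replay recent pseudo-ID samples
    # Outlier objective: push pseudo-OOD predictions toward uniform;
    # consistency term keeps pseudo-ID predictions close to the frozen model.
    loss_ood = -model(ood).log_softmax(dim=1).mean()
    loss_con = F.kl_div(model(id_batch).log_softmax(dim=1),
                        frozen(id_batch).softmax(dim=1), reduction='batchmean')
    opt.zero_grad(); (loss_ood + loss_con).backward(); opt.step()

model = torch.nn.Linear(32, 10)
frozen = copy.deepcopy(model).requires_grad_(False)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
bank = []
test_time_step(model, frozen, opt, torch.randn(64, 32), bank)
```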
arXiv Detail & Related papers (2023-03-22T02:28:54Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
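As a toy illustration of the reconstruction idea (the actual MOOD framework builds on a Masked Image Modeling pretext task with a far richer model), one can score inputs by how poorly a masked autoencoder trained on ID data reconstructs the masked regions; the tiny architecture and random masking below are assumptions:

```python
# Sketch of a reconstruction-error OOD score with a masked-modeling flavour
# (MOOD's full pipeline is richer; this only illustrates the idea).
import torch, torch.nn as nn

class TinyMaskedAE(nn.Module):
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x, mask):
        return self.dec(self.enc(x * mask))  # reconstruct from visible part only

def ood_score(model, x, mask_ratio=0.5):
    # Higher reconstruction error on masked regions = more OOD-like,
    # assuming the model was trained on ID data.
    mask = (torch.rand_like(x) > mask_ratio).float()
    recon = model(x, mask)
    return ((recon - x) ** 2 * (1 - mask)).sum(dim=1)  # error on masked region

model = TinyMaskedAE()
x = torch.rand(4, 784)  # stand-in flattened images
print(ood_score(model, x))
```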
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- On the Impact of Spurious Correlation for Out-of-distribution Detection [14.186776881154127]
We present a new formalization and model the data shifts by taking into account both the invariant and environmental features.
Our results suggest that the detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set.
arXiv Detail & Related papers (2021-09-12T23:58:17Z)
- MOOD: Multi-level Out-of-distribution Detection [13.207044902083057]
Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment.
We propose a novel framework, multi-level out-of-distribution detection (MOOD), which exploits intermediate classifier outputs for dynamic and efficient OOD inference.
MOOD achieves up to 71.05% computational reduction in inference, while maintaining competitive OOD detection performance.
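A minimal sketch of the multi-level idea: attach a classifier head to each intermediate block and stop computing once an exit is reached. The confidence-threshold exit rule and the energy score used here are illustrative stand-ins for the paper's actual routing criterion and detector:

```python
# Sketch of multi-level early-exit OOD inference (exit rule assumed: stop at
# the first intermediate head whose confidence clears a threshold).
import torch, torch.nn as nn

class MultiExitNet(nn.Module):
    def __init__(self, dim=32, n_cls=10, depth=3):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                                    for _ in range(depth))
        self.heads = nn.ModuleList(nn.Linear(dim, n_cls) for _ in range(depth))

    def forward(self, x, threshold=0.8):
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits = head(x)
            conf = logits.softmax(dim=1).max(dim=1).values
            if conf.min() > threshold:  # all samples confident: exit early
                break
        # Energy-style score at the chosen exit; higher = more OOD-like here.
        return -torch.logsumexp(logits, dim=1)

net = MultiExitNet()
print(net(torch.randn(4, 32)))
```

Easy inputs exit at shallow heads, which is where the claimed computational savings come from; hard or anomalous inputs traverse more blocks before being scored.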
arXiv Detail & Related papers (2021-04-30T02:18:31Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
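A compact sketch of such robust training, assuming PGD-style perturbations, a cross-entropy loss on inliers, and a KL-to-uniform outlier-exposure loss on auxiliary outliers; the epsilon, step sizes, and loss forms are placeholders rather than ALOE's published configuration:

```python
# Sketch of ALOE-style robust training: perturb both inliers and auxiliary
# outliers adversarially, then train on the worst-case versions.
import torch, torch.nn.functional as F

def pgd(model, x, loss_fn, eps=0.03, steps=5, alpha=0.01):
    x_adv = x.detach().clone().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x_adv))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += alpha * grad.sign()                   # ascend the loss
            x_adv.copy_(x + (x_adv - x).clamp(-eps, eps))  # project to eps-ball
    return x_adv.detach()

def uniform_kl(logits):
    # Outlier objective: predictions on outliers should be near-uniform.
    u = torch.full_like(logits, 1.0 / logits.shape[1])
    return F.kl_div(logits.log_softmax(dim=1), u, reduction='batchmean')

model = torch.nn.Linear(32, 10)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x_id, y_id = torch.randn(16, 32), torch.randint(0, 10, (16,))
x_out = torch.randn(16, 32)  # stand-in auxiliary outlier data

x_id_adv = pgd(model, x_id, lambda lg: F.cross_entropy(lg, y_id))
x_out_adv = pgd(model, x_out, uniform_kl)
loss = F.cross_entropy(model(x_id_adv), y_id) + uniform_kl(model(x_out_adv))
opt.zero_grad(); loss.backward(); opt.step()
```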
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.