Continual Unsupervised Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2406.02327v1
- Date: Tue, 4 Jun 2024 13:57:34 GMT
- Title: Continual Unsupervised Out-of-Distribution Detection
- Authors: Lars Doorenbos, Raphael Sznitman, Pablo Márquez-Neila
- Abstract summary: Current approaches assume that out-of-distribution samples originate from an unconcentrated distribution complementary to the training distribution.
We propose a method that starts from a U-OOD detector, which is agnostic to the OOD distribution, and slowly updates during deployment to account for the actual OOD distribution.
Our method uses a new U-OOD scoring function that combines the Mahalanobis distance with a nearest-neighbor approach.
- Score: 5.019613806273252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models excel when the data distribution during training aligns with the testing data. Yet, their performance diminishes when faced with out-of-distribution (OOD) samples, leading to great interest in the field of OOD detection. Current approaches typically assume that OOD samples originate from an unconcentrated distribution complementary to the training distribution. While this assumption is appropriate in the traditional unsupervised OOD (U-OOD) setting, it proves inadequate when considering where the underlying deep learning model is deployed. To better reflect this real-world scenario, we introduce the novel setting of continual U-OOD detection. To tackle this new setting, we propose a method that starts from a U-OOD detector, which is agnostic to the OOD distribution, and slowly updates during deployment to account for the actual OOD distribution. Our method uses a new U-OOD scoring function that combines the Mahalanobis distance with a nearest-neighbor approach. Furthermore, we design a confidence-scaled few-shot OOD detector that outperforms previous methods. We show that our method greatly improves upon strong baselines from related fields.
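The abstract does not give the combination rule, so the following is a minimal sketch of a scoring function that combines the Mahalanobis distance with a nearest-neighbor term. The equal-weight sum and the normalization of each term by its scale on the training features are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a U-OOD score: Mahalanobis distance plus a k-nearest-
# neighbour distance, each normalised by its training-set scale (assumption).
import numpy as np
from scipy.spatial import cKDTree

def fit_uood_scorer(train_feats, k=5):
    mu = train_feats.mean(axis=0)
    prec = np.linalg.pinv(np.cov(train_feats, rowvar=False))  # stable inverse
    tree = cKDTree(train_feats)

    def parts(x, skip_self=False):
        d = x - mu
        maha = np.sqrt(np.einsum("ij,jk,ik->i", d, prec, d))
        # distance to the k nearest training features (skip the point itself
        # when scoring the training set)
        dist, _ = tree.query(x, k=k + 1 if skip_self else k)
        knn = dist[:, 1:].mean(axis=1) if skip_self else dist.mean(axis=1)
        return maha, knn

    m0, k0 = parts(train_feats, skip_self=True)   # training-set scales

    def score(x):
        maha, knn = parts(x)
        return maha / m0.mean() + knn / k0.mean()  # higher = more OOD

    return score

rng = np.random.default_rng(0)
scorer = fit_uood_scorer(rng.normal(size=(1000, 16)))
print(scorer(rng.normal(size=(5, 16))).mean())            # in-distribution
print(scorer(rng.normal(loc=4.0, size=(5, 16))).mean())   # OOD: higher score
```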
Related papers
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer, creating a richer representation of the input.
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
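The entry above gives no implementation details; the sketch below only illustrates the core idea of projecting features through several randomly perturbed copies of the trained classification head. The perturbation scale and number of copies are assumptions, and how WeiPer turns this richer representation into an OOD score is not shown.

```python
# Hedged sketch: perturbed copies of the final fully connected layer's
# weights yield several logit vectors per input instead of one.
import torch

def perturbed_projections(features, fc_weight, n_copies=8, sigma=0.05):
    """features: (batch, d); fc_weight: (n_classes, d) from the trained head."""
    outs = []
    for _ in range(n_copies):
        noise = sigma * torch.randn_like(fc_weight)    # perturb the head
        outs.append(features @ (fc_weight + noise).T)  # perturbed logits
    return torch.stack(outs, dim=1)                    # (batch, copies, classes)

feats, w = torch.randn(4, 512), torch.randn(10, 512)
print(perturbed_projections(feats, w).shape)           # torch.Size([4, 8, 10])
```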
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
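A rough sketch of idea (2) above: paste a context-limited tail-class image onto a context-rich OOD image, so the tail class appears against varied backgrounds. The patch size and placement below are illustrative choices, not the paper's augmentation policy.

```python
# Hedged sketch: overlay a tail-class crop onto an OOD background image.
import numpy as np

def overlay(tail_img, ood_img, scale=0.5, rng=None):
    """tail_img, ood_img: HxWxC arrays of equal size; returns the composite."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = tail_img.shape[:2]
    ph, pw = int(h * scale), int(w * scale)
    top, left = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    cy, cx = (h - ph) // 2, (w - pw) // 2   # centre crop of the tail image
    out = ood_img.copy()
    out[top:top + ph, left:left + pw] = tail_img[cy:cy + ph, cx:cx + pw]
    return out                               # label remains the tail class

rng = np.random.default_rng(0)
tail = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
ood = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(overlay(tail, ood, rng=rng).shape)     # (32, 32, 3)
```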
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
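A hedged sketch of non-parametric test-time adaptation as described above: keep a memory bank of features, score each test sample by its nearest-neighbor distance to the bank, and insert samples judged in-distribution back into the bank so the detector adapts to the deployment stream. The insertion threshold `tau` is an assumed rule, not the paper's.

```python
import numpy as np

class OnlineKnnDetector:
    """Nearest-neighbour OOD score over a memory bank that grows at test time."""
    def __init__(self, train_feats, tau):
        self.bank = list(train_feats)
        self.tau = tau                       # insertion threshold (assumption)

    def score(self, x):
        d = np.linalg.norm(np.stack(self.bank) - x, axis=1)
        return d.min()                       # higher = more OOD

    def observe(self, x):
        s = self.score(x)
        if s < self.tau:                     # judged in-distribution: adapt
            self.bank.append(x)
        return s

rng = np.random.default_rng(0)
det = OnlineKnnDetector(rng.normal(size=(200, 8)), tau=1.5)
print(np.mean([det.observe(x) for x in rng.normal(size=(50, 8))]))
```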
- Can Pre-trained Networks Detect Familiar Out-of-Distribution Data? [37.36999826208225]
We study the effect of PT-OOD on the OOD detection performance of pre-trained networks.
We find that the low linear separability of PT-OOD in the feature space heavily degrades the PT-OOD detection performance.
We propose a solution tailored to large-scale pre-trained models: leveraging their powerful instance-level discriminative representations.
arXiv Detail & Related papers (2023-10-02T02:01:00Z)
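One plausible reading of instance-level detection, sketched below: L2-normalize pre-trained embeddings and use cosine similarity to the closest in-distribution instance, so detection relies on per-instance discriminability rather than class-level linear separability. The choice of cosine kNN is an assumption for illustration.

```python
import numpy as np

def instance_ood_score(id_feats, query_feats):
    """Cosine similarity to the closest in-distribution instance, negated so
    that higher values mean more OOD."""
    a = id_feats / np.linalg.norm(id_feats, axis=1, keepdims=True)
    b = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    return -(b @ a.T).max(axis=1)

rng = np.random.default_rng(0)
print(instance_ood_score(rng.normal(size=(100, 32)), rng.normal(size=(5, 32))))
```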
- Meta OOD Learning for Continuously Adaptive OOD Detection [38.28089655572316]
Out-of-distribution (OOD) detection is crucial to modern deep learning applications.
We propose a novel and more realistic setting called continuously adaptive out-of-distribution (CAOOD) detection.
We develop meta OOD learning (MOL) by designing a learning-to-adapt paradigm such that a good OOD detection model is learned during the training process.
arXiv Detail & Related papers (2023-09-21T01:05:45Z)
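The entry gives no mechanics, so as a loose illustration of learning-to-adapt, the sketch below meta-trains an initialization with a Reptile-style first-order update on toy Gaussian "tasks", so a small ID-vs-OOD classifier can be adapted in a few gradient steps. MOL's actual objective, architecture, and data are not reproduced here.

```python
# Heavily hedged sketch: Reptile-style meta-learning of an adaptable detector.
import torch
import torch.nn.functional as F

def make_task(seed):
    """Toy task: per-task shifted in-distribution data plus broader OOD data."""
    g = torch.Generator().manual_seed(seed)
    shift = torch.randn(8, generator=g)
    x_id = torch.randn(64, 8, generator=g) + shift
    x_ood = 3 * torch.randn(64, 8, generator=g) + shift
    y = torch.cat([torch.zeros(64), torch.ones(64)])   # 1 = OOD
    return torch.cat([x_id, x_ood]), y

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 1))
meta_lr, inner_lr = 0.1, 0.01
for step in range(100):
    x, y = make_task(step)
    init = [p.detach().clone() for p in model.parameters()]
    inner = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(5):                                 # adapt to the task
        loss = F.binary_cross_entropy_with_logits(model(x).squeeze(1), y)
        inner.zero_grad(); loss.backward(); inner.step()
    with torch.no_grad():                              # Reptile meta-update:
        for p, p0 in zip(model.parameters(), init):    # nudge the init toward
            p.copy_(p0 + meta_lr * (p - p0))           # the adapted weights
```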
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
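A hedged sketch of reconstruction-based scoring: mask patches of the input, reconstruct them with a model trained on in-distribution data, and use the reconstruction error on the masked region as the OOD score. `model` is a placeholder for any masked-image-modelling network; MOOD's full pipeline (pretext pre-training plus a downstream detector) is richer than this.

```python
import torch

def masked_recon_score(model, images, patch=4, mask_frac=0.5):
    """images: (B, C, H, W); returns one score per image (higher = more OOD)."""
    b, c, h, w = images.shape
    mask = (torch.rand(b, 1, h // patch, w // patch) < mask_frac).float()
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    recon = model(images * (1 - mask))      # reconstruct from unmasked context
    err = ((recon - images) ** 2) * mask    # error on the masked region only
    return err.flatten(1).sum(1) / (c * mask.flatten(1).sum(1)).clamp(min=1)

model = torch.nn.Identity()                 # stand-in for a trained MIM network
print(masked_recon_score(model, torch.randn(2, 3, 32, 32)))
```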
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
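One illustrative way to manufacture pseudo-OOD text from IND utterances is to corrupt them, here by randomly dropping and shuffling tokens; this is an assumption for illustration, not POORE's exact mechanism. A regularization term would then push the model's confidence down on these corrupted inputs.

```python
import random

def make_pseudo_ood(tokens, drop=0.3, rng=random.Random(0)):
    """Corrupt an IND token sequence into a pseudo-OOD one (assumed scheme)."""
    kept = [t for t in tokens if rng.random() > drop]   # drop tokens
    rng.shuffle(kept)                                   # break word order
    return kept or tokens[:1]                           # never return empty

print(make_pseudo_ood("book a table for two at noon".split()))
```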
- On the Impact of Spurious Correlation for Out-of-distribution Detection [14.186776881154127]
We present a new formalization and model the data shifts by taking into account both the invariant and environmental features.
Our results suggest that the detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set.
arXiv Detail & Related papers (2021-09-12T23:58:17Z)
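The formalization is easiest to see on synthetic data. In the toy generative model below (an illustrative stand-in for the paper's setup), an invariant feature follows the label while an environmental feature agrees with it only with probability r; as r grows, a classifier leans on the environmental feature, and "spurious OOD" inputs carrying only that feature become harder to reject.

```python
import numpy as np

def sample_train(n, r, rng):
    """Invariant feature follows the label; the environmental (spurious)
    feature agrees with the label only with probability r."""
    y = rng.integers(0, 2, n)
    inv = y + 0.3 * rng.normal(size=n)
    env = np.where(rng.random(n) < r, y, 1 - y) + 0.3 * rng.normal(size=n)
    return np.stack([inv, env], axis=1), y

rng = np.random.default_rng(0)
x_train, y_train = sample_train(1000, r=0.95, rng=rng)  # strong spurious link
# "spurious OOD": carries the environmental cue but not the invariant one
y_fake = rng.integers(0, 2, 200)
x_spurious_ood = np.stack([0.3 * rng.normal(size=200),
                           y_fake + 0.3 * rng.normal(size=200)], axis=1)
print(x_train.shape, x_spurious_ood.shape)
```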
- No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets [69.725266027309]
Out-of-distribution detection is an important component of reliable ML systems.
In this work, we show that none of the compared methods is inherently better at OOD detection than the others on a standardized set of 16 (ID, OOD) dataset pairs.
We also show that a method outperforming another on a certain (ID, OOD) pair may not do so in a low-data regime.
arXiv Detail & Related papers (2021-09-12T16:35:00Z)
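A minimal sketch of the comparison protocol behind this finding: compute AUROC for each scoring method on each (ID, OOD) pair and inspect how the rankings shift. The two toy scorers and Gaussian data below are placeholders for real detectors and datasets.

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """Probability that a random OOD sample scores above a random ID sample."""
    s = np.concatenate([id_scores, ood_scores])
    ranks = s.argsort().argsort()           # 0-indexed ranks
    n_id, n_ood = len(id_scores), len(ood_scores)
    return (ranks[n_id:].sum() - n_ood * (n_ood - 1) / 2) / (n_id * n_ood)

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))
methods = {
    "norm": lambda x: np.linalg.norm(x, axis=1),
    "knn": lambda x: np.min(np.linalg.norm(x[:, None] - train[None], axis=2),
                            axis=1),
}
for name, f in methods.items():
    for shift in (1.0, 3.0):                # two different "OOD datasets"
        ood = rng.normal(loc=shift, size=(200, 8))
        idx = rng.normal(size=(200, 8))
        print(name, shift, round(auroc(f(idx), f(ood)), 3))
```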
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
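A hedged sketch of robust training in the spirit described above: perturb both inlier and outlier batches adversarially (PGD on an L-infinity ball), then train with cross-entropy on inliers and a push-to-uniform loss on outliers. The hyperparameters and the exact losses are illustrative assumptions, not ALOE's published recipe.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, loss_fn, eps=0.03, step=0.01, iters=5):
    """Find an L-infinity perturbation of x that maximises loss_fn."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = loss_fn(model(x + delta))
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

def train_step(model, opt, x_in, y_in, x_out):
    uniform = lambda lg: -F.log_softmax(lg, 1).mean()  # push toward uniform
    x_in_adv = pgd(model, x_in, lambda lg: F.cross_entropy(lg, y_in))
    x_out_adv = pgd(model, x_out, uniform)             # worst-case outliers
    loss = F.cross_entropy(model(x_in_adv), y_in) + uniform(model(x_out_adv))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = torch.nn.Linear(8, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
print(train_step(model, opt, torch.randn(16, 8), torch.randint(0, 3, (16,)),
                 torch.randn(16, 8)))
```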
This list is automatically generated from the titles and abstracts of the papers on this site.