From Global to Local: Multi-scale Out-of-distribution Detection
- URL: http://arxiv.org/abs/2308.10239v1
- Date: Sun, 20 Aug 2023 11:56:25 GMT
- Title: From Global to Local: Multi-scale Out-of-distribution Detection
- Authors: Ji Zhang, Lianli Gao, Bingguang Hao, Hao Huang, Jingkuan Song, Hengtao
Shen
- Abstract summary: Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework that leverages both global visual information and local region details.
- Score: 129.37607313927458
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection aims to detect "unknown" data whose
labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD
detection that recognizes inputs as ID/OOD according to their relative
distances to the training data of ID classes. Previous approaches calculate
pairwise distances relying only on global image representations, which can be
sub-optimal as the inevitable background clutter and intra-class variation may
drive image-level representations from the same ID class far apart in a given
representation space. In this work, we overcome this challenge by proposing
Multi-scale OOD DEtection (MODE), the first framework that leverages both global
visual information and local region details of images to maximally benefit OOD
detection. Specifically, we first find that existing models pretrained with
off-the-shelf cross-entropy or contrastive losses are unable to capture
valuable local representations for MODE, due to the scale discrepancy between
the ID training and OOD detection processes. To mitigate this issue and
encourage locally discriminative representations in ID training, we propose
Attention-based Local PropAgation (ALPA), a trainable objective that exploits a
cross-attention mechanism to align and highlight the local regions of the
target objects across paired examples. During test-time OOD detection, a
Cross-Scale Decision (CSD) function is further devised on the most
discriminative multi-scale representations to distinguish ID/OOD data more
faithfully. We demonstrate the effectiveness and flexibility of MODE on several
benchmarks -- on average, MODE outperforms the previous state-of-the-art by up
to 19.24% in FPR and 2.77% in AUROC. Code is available at
https://github.com/JimZAI/MODE-OOD.
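To make the abstract's multi-scale idea concrete, the sketch below illustrates distance-based OOD scoring that fuses a global image embedding with local region embeddings via per-class prototypes and a simple cross-scale rule. The function names, prototype construction, and min-based fusion are illustrative assumptions made for exposition, not MODE's actual ALPA/CSD implementation; see the linked repository for the authors' code.

```python
# Illustrative sketch (not the paper's exact ALPA/CSD implementation):
# distance-based OOD scoring that fuses a global embedding with local
# region embeddings. Feature shapes and the fusion rule are assumptions.
import numpy as np

def nearest_prototype_distance(query, prototypes):
    """Smallest Euclidean distance from `query` (D,) to any row of `prototypes` (C, D)."""
    return np.min(np.linalg.norm(prototypes - query, axis=1))

def multiscale_ood_score(global_feat, local_feats, global_protos, local_protos):
    """
    global_feat   : (D,)   image-level embedding of the test input
    local_feats   : (R, D) region/patch embeddings of the test input
    global_protos : (C, D) per-class prototypes from ID training images
    local_protos  : (C, D) per-class prototypes from ID local regions
    Returns a scalar OOD score; larger means "more likely OOD".
    """
    # Global score: distance to the closest ID class prototype.
    g_score = nearest_prototype_distance(global_feat, global_protos)
    # Local score: each region's distance to its closest local prototype,
    # keeping the most ID-like (smallest) region distance.
    l_score = np.min([nearest_prototype_distance(r, local_protos) for r in local_feats])
    # Simplified cross-scale decision: trust whichever scale gives the
    # stronger ID evidence, i.e. the smaller of the two distances.
    return min(g_score, l_score)

# Usage: flag the input as OOD when the score exceeds a validation-chosen threshold.
# is_ood = multiscale_ood_score(g, l, gp, lp) > tau
```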
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning [10.696635172502141]
We introduce our method, called Representation Norm Amplification (RNA), which addresses the problem of detecting out-of-distribution samples.
Experiments show that RNA achieves superior performance in both OOD detection and classification compared to the state-of-the-art methods.
arXiv Detail & Related papers (2024-08-20T09:27:07Z)
- Out-of-Distribution Detection Using Peer-Class Generated by Large Language Model [0.0]
Out-of-distribution (OOD) detection is a critical task to ensure the reliability and security of machine learning models.
In this paper, a novel method called ODPC is proposed, in which a large language model designs specific prompts to generate OOD peer classes of ID semantics.
Experiments on five benchmark datasets show that the method we propose can yield state-of-the-art results.
arXiv Detail & Related papers (2024-03-20T06:04:05Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data (a minimal overlay sketch appears after this list).
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Generalized Open-World Semi-Supervised Object Detection [22.058195650206944]
We introduce an ensemble-based OOD Explorer for detection and classification, and an adaptable semi-supervised object detection framework.
We demonstrate that our method performs competitively against state-of-the-art OOD detection algorithms and also significantly boosts the semi-supervised learning performance for both ID and OOD classes.
arXiv Detail & Related papers (2023-07-28T17:59:03Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning [37.36999826208225]
We present a novel vision-language prompt learning approach for few-shot out-of-distribution (OOD) detection.
LoCoOp performs OOD regularization that utilizes portions of CLIP's local features as OOD features during training.
LoCoOp outperforms existing zero-shot and fully supervised detection methods.
arXiv Detail & Related papers (2023-06-02T06:33:08Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss used by Outlier Exposure has an implicit scoring function that differs in a non-trivial fashion from the theoretically optimal one.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to know whether a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
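As referenced in the EAT entry above, the idea of augmenting context-limited tail classes by overlaying ID tail-class images onto context-rich OOD images can be sketched roughly as follows. The patch scale, random placement, nearest-neighbour resizing, and the function name are illustrative assumptions, not that paper's exact recipe.

```python
# Illustrative sketch of "overlaying images onto context-rich OOD data"
# (EAT entry above). Patch size and placement are assumptions for clarity.
import numpy as np

def overlay_tail_on_ood(tail_img, ood_img, scale=0.5, rng=None):
    """
    tail_img : (H0, W0, C) ID tail-class image (foreground object).
    ood_img  : (H, W, C)   OOD image providing rich background context.
    scale    : fraction of the OOD canvas covered by the pasted tail image.
    Returns an augmented image that keeps the tail-class label.
    """
    rng = rng or np.random.default_rng()
    H, W, _ = ood_img.shape
    h, w = int(H * scale), int(W * scale)
    # Resize the tail image by simple nearest-neighbour index sampling.
    ys = np.arange(h) * tail_img.shape[0] // h
    xs = np.arange(w) * tail_img.shape[1] // w
    patch = tail_img[ys][:, xs]
    # Paste the patch at a random location on the OOD background.
    top = rng.integers(0, H - h + 1)
    left = rng.integers(0, W - w + 1)
    out = ood_img.copy()
    out[top:top + h, left:left + w] = patch
    return out
```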