EAT: Towards Long-Tailed Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2312.08939v1
- Date: Thu, 14 Dec 2023 13:47:13 GMT
- Title: EAT: Towards Long-Tailed Out-of-Distribution Detection
- Authors: Tong Wei, Bo-Lin Wang, Min-Ling Zhang
- Abstract summary: This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
- Score: 55.380390767978554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent advancements in out-of-distribution (OOD) detection, most
current studies assume a class-balanced in-distribution training dataset, which
is rarely the case in real-world scenarios. This paper addresses the
challenging task of long-tailed OOD detection, where the in-distribution data
follows a long-tailed class distribution. The main difficulty lies in
distinguishing OOD data from samples belonging to the tail classes, as the
ability of a classifier to detect OOD instances is not strongly correlated with
its accuracy on the in-distribution classes. To overcome this issue, we propose
two simple ideas: (1) Expanding the in-distribution class space by introducing
multiple abstention classes. This approach allows us to build a detector with
clear decision boundaries by training on OOD data using virtual labels. (2)
Augmenting the context-limited tail classes by overlaying images onto the
context-rich OOD data. This technique encourages the model to pay more
attention to the discriminative features of the tail classes. We provide a clue
for separating in-distribution and OOD data by analyzing gradient noise.
Through extensive experiments, we demonstrate that our method outperforms the
current state-of-the-art on various benchmark datasets. Moreover, our method
can be used as an add-on for existing long-tail learning approaches,
significantly enhancing their OOD detection performance. Code is available at:
https://github.com/Stomach-ache/Long-Tailed-OOD-Detection.
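A minimal PyTorch sketch of the two ideas (the class counts, the virtual-label assignment rule, and the overlay geometry below are illustrative assumptions, not the authors' released implementation):

```python
import torch
import torch.nn.functional as F

NUM_ID_CLASSES = 10   # in-distribution classes (illustrative)
NUM_ABSTENTION = 3    # extra abstention classes (illustrative)

def eat_style_loss(logits_id, labels_id, logits_ood):
    """Cross-entropy over an expanded (ID + abstention) label space.
    ID samples keep their real labels; each OOD sample is trained
    against a virtual abstention label."""
    loss_id = F.cross_entropy(logits_id, labels_id)
    # One simple choice of virtual label: the OOD sample's highest-scoring
    # abstention logit (the paper's exact assignment rule may differ).
    virtual = logits_ood[:, NUM_ID_CLASSES:].argmax(dim=1) + NUM_ID_CLASSES
    loss_ood = F.cross_entropy(logits_ood, virtual)
    return loss_id + loss_ood

def overlay_tail_on_ood(tail_img, ood_img, ratio=0.5):
    """Paste a crop of a tail-class image onto a context-rich OOD image,
    a CutMix-like overlay (the patch placement here is illustrative)."""
    _, h, w = tail_img.shape
    ph, pw = int(h * ratio), int(w * ratio)
    out = ood_img.clone()
    out[:, :ph, :pw] = tail_img[:, :ph, :pw]
    return out
```

At test time, the probability mass on the abstention logits can serve directly as an OOD score, which is what makes the expanded class space usable as a detector.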
Related papers
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer, which create a richer representation of the input.
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
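A minimal PyTorch sketch of the perturbation idea (the perturbation count, noise scale, and the downstream scoring step are assumptions, not the paper's exact procedure):

```python
import torch

def perturbed_class_projections(features, fc_weight, n_perturb=10, scale=0.1):
    """Project penultimate features through several randomly perturbed
    copies of the final fully connected weights, yielding a richer
    representation for OOD scoring."""
    views = [features @ (fc_weight + scale * torch.randn_like(fc_weight)).T
             for _ in range(n_perturb)]
    return torch.cat(views, dim=1)  # (batch, n_perturb * num_classes)
```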
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
- ExCeL: Combined Extreme and Collective Logit Information for Enhancing Out-of-Distribution Detection [9.689089164964484]
ExCeL combines extreme and collective information within the output layer for enhanced accuracy in OOD detection.
We show that ExCeL is consistently among the top five of twenty-one existing post-hoc baselines.
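One plausible reading of that combination, sketched in PyTorch (the temperature and the exact fusion rule are assumptions, not the paper's formula):

```python
import torch

def excel_style_score(logits, tau=1.0):
    """Combine the extreme signal (the top logit) with collective
    information from the full class distribution."""
    extreme = logits.max(dim=1).values
    probs = torch.softmax(logits / tau, dim=1)
    # Collective term: negative entropy of the class distribution.
    collective = (probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return extreme + collective  # higher => more likely in-distribution
```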
arXiv Detail & Related papers (2023-11-23T14:16:03Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework leveraging both global visual information and local region details.
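A minimal sketch of multi-scale, distance-based scoring in PyTorch (the additive fusion below is an illustrative assumption, not MODE's exact rule):

```python
import torch

def mode_style_score(global_feat, local_feats, id_global, id_local):
    """Fuse a global-embedding distance with the best local-region match
    against stored in-distribution features."""
    g = torch.cdist(global_feat.unsqueeze(0), id_global).min()  # global scale
    l = torch.cdist(local_feats, id_local).min()                # local scale
    return -(g + l)  # higher score => more likely in-distribution
```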
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Using Semantic Information for Defining and Detecting OOD Inputs [3.9577682622066264]
Out-of-distribution (OOD) detection has received some attention recently.
We demonstrate that the current detectors inherit the biases in the training dataset.
This can render the current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information.
We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets.
arXiv Detail & Related papers (2023-02-21T21:31:20Z)
- GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection [67.90365841083951]
We develop a new graph contrastive learning framework GOOD-D for detecting OOD graphs without using any ground-truth labels.
GOOD-D is able to capture the latent ID patterns and accurately detect OOD graphs based on the semantic inconsistency in different granularities.
In this pioneering work on unsupervised graph-level OOD detection, we build a comprehensive benchmark to compare our proposed approach with different state-of-the-art methods.
arXiv Detail & Related papers (2022-11-08T12:41:58Z)
- Out-of-Distribution Detection with Hilbert-Schmidt Independence Optimization [114.43504951058796]
Outlier detection tasks have been playing a critical role in AI safety.
Deep neural network classifiers usually tend to incorrectly classify out-of-distribution (OOD) inputs into in-distribution classes with high confidence.
We propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for the OOD detection tasks.
arXiv Detail & Related papers (2022-09-26T15:59:55Z)
- Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art by 1.29%, 1.45%, and 0.69% in anomaly detection false positive rate (FPR) and by 3.24%, 4.06%, and 7.89% in in-distribution classification accuracy.
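A simplified PyTorch sketch of the asymmetric contrastive idea (a SupCon-style loss restricted to ID anchors; PASCL additionally applies it partially, e.g. to tail classes, which is omitted here):

```python
import torch
import torch.nn.functional as F

def pascl_style_loss(z_id, labels, z_ood, temp=0.1):
    """Asymmetric supervised contrastive term: ID anchors are pulled toward
    same-class ID embeddings and contrasted against OOD embeddings, while
    OOD samples never serve as anchors."""
    z_id, z_ood = F.normalize(z_id, dim=1), F.normalize(z_ood, dim=1)
    sim_id = z_id @ z_id.T / temp     # ID-vs-ID similarities
    sim_ood = z_id @ z_ood.T / temp   # ID-vs-OOD similarities
    n = len(z_id)
    loss = z_id.new_zeros(())
    for i in range(n):
        pos = labels == labels[i]
        pos[i] = False                # exclude the anchor itself
        if not pos.any():
            continue
        others = torch.arange(n, device=z_id.device) != i
        # Denominator: all other ID samples plus all OOD samples.
        denom = torch.cat([sim_id[i][others], sim_ood[i]])
        loss = loss + (torch.logsumexp(denom, dim=0) - sim_id[i][pos]).mean()
    return loss / n
```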
arXiv Detail & Related papers (2022-07-04T01:53:07Z)
- Metric Learning and Adaptive Boundary for Out-of-Domain Detection [0.9236074230806579]
We have designed an OOD detection algorithm independent of OOD data.
Our algorithm is based on a simple but efficient approach of combining metric learning with adaptive decision boundary.
Compared to other algorithms, we find that our proposed algorithm significantly improves OOD performance in scenarios with fewer classes.
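A minimal sketch of the resulting decision rule, assuming per-class centroids and boundary radii have already been learned (names and shapes are illustrative):

```python
import torch

def is_ood(embedding, centroids, radii):
    """Flag an embedding as out-of-domain when its distance to the nearest
    class centroid exceeds that class's adaptive boundary radius (the metric
    and the radii are learned jointly in the paper's setting)."""
    dists = torch.norm(centroids - embedding, dim=1)  # distance to each centroid
    nearest = dists.argmin()
    return dists[nearest] > radii[nearest]
```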
arXiv Detail & Related papers (2022-04-22T17:54:55Z)
- Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data, which naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.