Out-of-distribution Detection with Implicit Outlier Transformation
- URL: http://arxiv.org/abs/2303.05033v1
- Date: Thu, 9 Mar 2023 04:36:38 GMT
- Title: Out-of-distribution Detection with Implicit Outlier Transformation
- Authors: Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander,
Tongliang Liu, Jianye Hao, Bo Han
- Abstract summary: Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
- Score: 72.73711947366377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection,
enhancing detection capability via model fine-tuning with surrogate OOD data.
However, surrogate data typically deviate from test OOD data. Thus, the
performance of OE, when facing unseen OOD data, can be weakened. To address
this issue, we propose a novel OE-based approach that makes the model perform
well even in unseen OOD situations. This leads to a
min-max learning scheme -- searching for synthesized OOD data that lead to the
worst detection judgments, and learning from such data for uniform performance in OOD
detection. In our realization, these worst OOD data are synthesized by
transforming original surrogate ones. Specifically, the associated transform
functions are learned implicitly based on our novel insight that model
perturbation leads to data transformation. Our methodology offers an efficient
way of synthesizing OOD data, which can further benefit the detection model,
besides the surrogate OOD data. We conduct extensive experiments under various
OOD detection setups, demonstrating the effectiveness of our method against its
advanced counterparts.
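The min-max scheme above can be illustrated with a toy sketch: an inner loop ascends a norm-bounded perturbation of the model weights that worsens the outlier-exposure (OE) loss on surrogate outliers (perturbing the model implicitly transforms the outlier data the model sees), and an outer loop trains on in-distribution (ID) labels plus that worst-case OE term. This is a minimal illustration of the idea, not the authors' implementation; the linear softmax model, the data, and all hyperparameters (`eps`, `lr`, step counts) are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def oe_loss(W, x):
    """OE term: cross-entropy between the model's prediction on outliers
    and the uniform distribution (minimized at log k)."""
    p = softmax(x @ W)
    return -np.mean(np.log(p + 1e-12))

def grad_oe(W, x):
    """Gradient of oe_loss w.r.t. the weights of a linear softmax model."""
    n, k = x.shape[0], W.shape[1]
    p = softmax(x @ W)
    return x.T @ (p - 1.0 / k) / n

d, k, eps, lr = 5, 3, 0.5, 0.1
W_true = rng.normal(size=(d, k))             # generates toy ID labels
x_id = rng.normal(size=(64, d)) + 2.0        # toy in-distribution data
y_id = (x_id @ W_true).argmax(axis=1)
x_sur = rng.normal(size=(64, d)) - 2.0       # toy surrogate outliers
W = rng.normal(size=(d, k))
init_gap = abs(oe_loss(W, x_sur) - np.log(k))

for step in range(200):
    # Inner max: ascend a bounded weight perturbation that worsens the OE
    # loss -- per the abstract's insight, perturbing the model implicitly
    # transforms the surrogate outliers it sees.
    delta = np.zeros_like(W)
    for _ in range(5):
        delta += 0.1 * grad_oe(W + delta, x_sur)
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta *= eps / norm
    # Outer min: standard ID cross-entropy plus the worst-case OE term.
    p_id = softmax(x_id @ W)
    g_id = x_id.T @ (p_id - np.eye(k)[y_id]) / len(x_id)
    W -= lr * (g_id + grad_oe(W + delta, x_sur))

final_gap = abs(oe_loss(W, x_sur) - np.log(k))
msp_id = softmax(x_id @ W).max(axis=1).mean()   # max-softmax OOD score
msp_ood = softmax(x_sur @ W).max(axis=1).mean()
```

After training, predictions on surrogate outliers sit near uniform (OE loss close to log k) while ID inputs keep higher max-softmax scores, so the max-softmax statistic separates the two toy populations.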
Related papers
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z) - Out-of-distribution Detection Learning with Unreliable
Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, on which the predictor cannot make valid predictions as it does for in-distribution (ID) data.
It is typically hard to collect real OOD data for training a predictor capable of discerning OOD patterns.
We propose a data generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that can mitigate mistaken OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z) - Can Pre-trained Networks Detect Familiar Out-of-Distribution Data? [37.36999826208225]
We study the effect of PT-OOD on the OOD detection performance of pre-trained networks.
We find that the low linear separability of PT-OOD in the feature space heavily degrades the PT-OOD detection performance.
We propose a unique solution to large-scale pre-trained models: Leveraging powerful instance-by-instance discriminative representations of pre-trained models.
arXiv Detail & Related papers (2023-10-02T02:01:00Z) - AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
arXiv Detail & Related papers (2023-03-22T02:28:54Z) - Harnessing Out-Of-Distribution Examples via Augmenting Content and Style [93.21258201360484]
Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples.
This paper proposes a HOOD method that can leverage the content and style from each image instance to identify benign and malign OOD data.
Thanks to the proposed novel disentanglement and data augmentation techniques, HOOD can effectively deal with OOD examples in unknown and open environments.
arXiv Detail & Related papers (2022-07-07T08:48:59Z) - ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining [51.19164318924997]
Adversarial Training with informative Outlier Mining improves the robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
arXiv Detail & Related papers (2020-06-26T20:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.