Learning to Augment Distributions for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2311.01796v2
- Date: Mon, 25 Dec 2023 07:48:47 GMT
- Title: Learning to Augment Distributions for Out-of-Distribution Detection
- Authors: Qizhou Wang, Zhen Fang, Yonggang Zhang, Feng Liu, Yixuan Li, Bo Han
- Abstract summary: Open-world classification systems should discern out-of-distribution (OOD) data whose labels deviate from those of in-distribution (ID) cases.
We propose Distributional-Augmented OOD Learning (DAL) to alleviate the OOD distribution discrepancy.
- Score: 49.12437300327712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-world classification systems should discern out-of-distribution (OOD)
data whose labels deviate from those of in-distribution (ID) cases, motivating
recent studies in OOD detection. Advanced works, despite their promising
progress, may still fail in the open world, owing to the lack of knowledge
about unseen OOD data in advance. Although one can access auxiliary OOD data
(distinct from the unseen ones) for model training, it remains unclear how such
auxiliary data will work in the open world. To this end, we study this
problem from a learning-theory perspective, finding that the distribution
discrepancy between the auxiliary and the unseen real OOD data is the key to
affecting the open-world detection performance. Accordingly, we propose
Distributional-Augmented OOD Learning (DAL), alleviating the OOD distribution
discrepancy by crafting an OOD distribution set that contains all distributions
in a Wasserstein ball centered on the auxiliary OOD distribution. We justify
that the predictor trained over the worst OOD data in the ball can shrink the
OOD distribution discrepancy, thus improving the open-world detection
performance given only the auxiliary OOD data. We conduct extensive evaluations
across representative OOD detection setups, demonstrating the superiority of
our DAL over its advanced counterparts.
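The training principle described in the abstract (fit the detector against the worst distribution inside a Wasserstein ball centered on the auxiliary OOD distribution) can be illustrated with a short PyTorch-style sketch. The inner maximization below uses a common Lagrangian surrogate of such a ball, perturbing auxiliary outliers under a transport-cost penalty; this is only an illustration of the worst-case outlier-exposure idea, not DAL's published algorithm, and the classifier `net`, loss weights, and step sizes are assumptions.
```python
# Minimal sketch of worst-case outlier exposure (illustrative, not DAL's exact
# algorithm). Assumes a PyTorch classifier `net` that maps inputs to class logits.
import torch
import torch.nn.functional as F

def oe_loss(logits):
    # Outlier-exposure objective: push OOD predictions toward the uniform
    # distribution, i.e., cross-entropy between softmax(logits) and uniform.
    return -F.log_softmax(logits, dim=1).mean(dim=1).mean()

def craft_worst_ood(net, x_aux, steps=5, step_size=1.0, gamma=0.1):
    # Approximate the "worst" OOD data near the auxiliary outliers: ascend the
    # OE loss while paying a quadratic transport cost that keeps the crafted
    # samples close to the originals (a Lagrangian surrogate of a Wasserstein ball).
    x = x_aux.clone().detach().requires_grad_(True)
    for _ in range(steps):
        obj = oe_loss(net(x)) - gamma * ((x - x_aux) ** 2).flatten(1).sum(1).mean()
        (grad,) = torch.autograd.grad(obj, x)
        x = (x + step_size * grad).detach().requires_grad_(True)
    return x.detach()

def train_step(net, optimizer, x_id, y_id, x_aux, beta=0.5):
    # Joint objective: standard ID classification plus the OE loss evaluated on
    # the crafted worst-case outliers instead of the raw auxiliary ones.
    x_worst = craft_worst_ood(net, x_aux)
    optimizer.zero_grad()
    loss = F.cross_entropy(net(x_id), y_id) + beta * oe_loss(net(x_worst))
    loss.backward()
    optimizer.step()
    return loss.item()
```
In each inner step, the crafted outliers are auxiliary samples pushed toward regions where the outlier-exposure loss is largest, while the quadratic cost keeps them near the auxiliary distribution, mirroring the ball-radius constraint in spirit.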
Related papers
- Out-of-Distribution Learning with Human Feedback [26.398598663165636]
This paper presents a novel framework for OOD learning with human feedback.
Our framework capitalizes on the freely available unlabeled data in the wild.
By exploiting human feedback, we enhance the robustness and reliability of machine learning models.
arXiv Detail & Related papers (2024-08-14T18:49:27Z)
- Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, on which the predictor cannot make valid predictions as it does on in-distribution (ID) data.
It is typically hard to collect real OOD data for training a predictor capable of discerning OOD patterns.
We propose a data generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that can relieve mistaken OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Out-of-distribution (OOD) detection methods assume that they have test ground truths, i.e., whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as the pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data, which naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
- ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining [51.19164318924997]
Adversarial Training with informative Outlier Mining (ATOM) improves the robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
arXiv Detail & Related papers (2020-06-26T20:58:05Z)