Diversified Outlier Exposure for Out-of-Distribution Detection via
Informative Extrapolation
- URL: http://arxiv.org/abs/2310.13923v2
- Date: Thu, 26 Oct 2023 06:50:44 GMT
- Title: Diversified Outlier Exposure for Out-of-Distribution Detection via
Informative Extrapolation
- Authors: Jianing Zhu, Geng Yu, Jiangchao Yao, Tongliang Liu, Gang Niu, Masashi
Sugiyama, Bo Han
- Abstract summary: Out-of-distribution (OOD) detection is important for deploying reliable machine learning models in real-world applications.
Recent advances in outlier exposure have shown promising results on OOD detection via fine-tuning the model with informatively sampled auxiliary outliers.
We propose a novel framework, namely, Diversified Outlier Exposure (DivOE), for effective OOD detection via informative extrapolation based on the given auxiliary outliers.
- Score: 110.34982764201689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is important for deploying reliable
machine learning models in real-world applications. Recent advances in outlier
exposure have shown promising results on OOD detection via fine-tuning the model
with informatively sampled auxiliary outliers. However, previous methods assume
that the collected outliers can be sufficiently large and representative to
cover the boundary between ID and OOD data, which might be impractical and
challenging. In this work, we propose a novel framework, namely, Diversified
Outlier Exposure (DivOE), for effective OOD detection via informative
extrapolation based on the given auxiliary outliers. Specifically, DivOE
introduces a new learning objective, which diversifies the auxiliary
distribution by explicitly synthesizing more informative outliers for
extrapolation during training. It leverages a multi-step optimization method to
generate novel outliers beyond the original ones, which is compatible with many
variants of outlier exposure. Extensive experiments and analyses have been
conducted to characterize and demonstrate the effectiveness of the proposed
DivOE. The code is publicly available at: https://github.com/tmlr-group/DivOE.
Related papers
- OAML: Outlier Aware Metric Learning for OOD Detection Enhancement [5.357756138014614]
Out-of-distribution (OOD) detection methods have been developed to identify objects that a model has not seen during training.
The Outlier Exposure (OE) methods use auxiliary datasets to train OOD detectors directly.
We propose the Outlier Aware Metric Learning (OAML) framework to tackle the collection and learning of representative OOD samples.
arXiv Detail & Related papers (2024-06-24T11:01:43Z)
- Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Out-of-distribution (OOD) samples are crucial when deploying machine learning models in open-world scenarios.
We propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to envision potential outlier exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-06-02T17:09:48Z)
- Deep Metric Learning-Based Out-of-Distribution Detection with Synthetic Outlier Exposure [0.0]
We propose a label-mixup approach to generate synthetic OOD data using Denoising Diffusion Probabilistic Models (DDPMs).
In the experiments, we found that metric learning-based loss functions perform better than the softmax-based loss.
Our approach outperforms strong baselines in conventional OOD detection metrics.
arXiv Detail & Related papers (2024-05-01T16:58:22Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs).
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- On the Usefulness of Deep Ensemble Diversity for Out-of-Distribution Detection [7.221206118679026]
The ability to detect Out-of-Distribution (OOD) data is important in safety-critical applications of deep learning.
An existing intuition in the literature is that the diversity of Deep Ensemble predictions indicates distributional shift.
We show experimentally that this intuition is not valid on ImageNet-scale OOD detection.
arXiv Detail & Related papers (2022-07-15T15:02:38Z)
- Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data, which naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
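Several of the outlier-exposure methods listed above build on the same base objective: standard cross-entropy on in-distribution data plus a regularizer pushing predictions on auxiliary outliers toward the uniform distribution over classes. A minimal NumPy sketch of this objective (function names and the batch shapes are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def oe_loss(logits_id, labels_id, logits_out, lam=0.5):
    """Cross-entropy on the ID batch plus a term that pushes
    predictions on the outlier batch toward the uniform distribution."""
    p_id = softmax(logits_id)
    ce = -np.log(p_id[np.arange(len(labels_id)), labels_id] + 1e-12).mean()
    # CE(uniform, p) = -(1/K) * sum_k log p_k, averaged over the outlier batch
    log_p_out = np.log(softmax(logits_out) + 1e-12)
    unif_ce = -log_p_out.mean(axis=-1).mean()
    return ce + lam * unif_ce
```

Confident (peaked) predictions on outliers raise the second term, so minimizing this loss teaches the model to be uncertain on OOD-like inputs while staying accurate on ID data.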
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.