Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability
- URL: http://arxiv.org/abs/2306.03715v1
- Date: Tue, 6 Jun 2023 14:23:34 GMT
- Title: Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability
- Authors: Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang
Xu, Bo Han
- Abstract summary: Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capability of a well-trained model using only ID data.
Our method uses a mask to identify the memorized atypical samples and then finetunes the model, or prunes it with the introduced mask, to forget them.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is an indispensable aspect of secure AI
when deploying machine learning models in real-world applications. Previous
paradigms either explore better scoring functions or utilize the knowledge of
outliers to equip the models with the ability of OOD detection. However, few of
them pay attention to the intrinsic OOD detection capability of the given
model. In this work, we find that a model trained on in-distribution (ID) data
generally passes through an intermediate stage with higher OOD detection
performance than its final stage across different settings, and we further
identify one critical data-level attribution: learning with atypical samples.
Based on these insights, we propose a novel method, Unleashing Mask, which aims
to restore the OOD discriminative capability of a well-trained model using only
ID data. Our method uses a mask to identify the memorized atypical samples and
then finetunes the model, or prunes it with the introduced mask, to forget
them. Extensive experiments and analysis
demonstrate the effectiveness of our method. The code is available at:
https://github.com/tmlr-group/Unleashing-Mask.
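The recipe the abstract describes (flag memorized atypical ID samples with a mask, then finetune or prune to forget them) can be illustrated with a toy sketch. The following is a minimal NumPy illustration under simplifying assumptions of my own: a logistic-regression stand-in for the model, a loss-quantile rule for the mask, and "forgetting" by retraining without the masked samples. It is not the authors' actual implementation, which operates on deep networks with mask-guided finetuning or pruning.

```python
# Toy sketch of the mask-then-forget idea. All thresholds and choices here
# are illustrative assumptions, not the paper's actual objective.
import numpy as np

rng = np.random.default_rng(0)

# Toy ID data: two Gaussian classes, plus a few atypical (mislabeled-like)
# points: the first 5 class-0 samples are drawn from the class-1 region.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
X[:5] = rng.normal(2, 1, (5, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train(X, y, sample_weight, steps=500, lr=0.1):
    """Weighted logistic regression by gradient descent (the toy 'model')."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        g = (p - y) * sample_weight       # per-sample gradient factor
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# 1) Train on all ID data (the "well-trained model").
w, b = train(X, y, np.ones(len(y)))

# 2) Build a mask flagging memorized atypical samples: here, simply the
#    samples with the highest per-sample loss under the trained model.
p = sigmoid(X @ w + b)
loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
mask = loss > np.quantile(loss, 0.95)     # top 5% treated as atypical

# 3) "Forget" the masked samples by finetuning with them excluded,
#    a stand-in for the paper's mask-guided finetuning/pruning.
w2, b2 = train(X, y, (~mask).astype(float))
```

The loss-quantile rule and the retrain-without-them forgetting step are placeholder choices for illustration; the point is only the two-stage structure: identify atypical memorized samples via a mask, then update the model so it no longer fits them.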
Related papers
- Going Beyond Conventional OOD Detection
Out-of-distribution (OOD) detection is critical to ensure the safe deployment of deep learning models in critical applications.
We present ASCOOD, a unified approach to Spurious, fine-grained, and Conventional OOD Detection.
Our approach effectively mitigates the impact of spurious correlations and encourages capturing fine-grained attributes.
arXiv Detail & Related papers (2024-11-16T13:04:52Z)
- Forte: Finding Outliers with Representation Typicality Estimation
Generative models can now produce synthetic data which is virtually indistinguishable from the real data used to train them.
Recent work on OOD detection has raised doubts that generative model likelihoods are optimal OOD detectors.
We introduce a novel approach that leverages representation learning, and informative summary statistics based on manifold estimation.
arXiv Detail & Related papers (2024-10-02T08:26:37Z)
- Exploiting Diffusion Prior for Out-of-Distribution Detection
Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models.
We present a novel approach for OOD detection that leverages the generative ability of diffusion models and the powerful feature extraction capabilities of CLIP.
arXiv Detail & Related papers (2024-06-16T23:55:25Z)
- Out-of-Distribution Detection with a Single Unconditional Diffusion Model
Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples.
Traditionally, unsupervised methods utilize a deep generative model for OOD detection.
This paper explores whether a single model can perform OOD detection across diverse tasks.
arXiv Detail & Related papers (2024-05-20T08:54:03Z)
- From Global to Local: Multi-scale Out-of-distribution Detection
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), a first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need
We find, surprisingly, that simply using reconstruction-based methods can boost OOD detection performance significantly.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- Watermarking for Out-of-distribution Detection
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking.
arXiv Detail & Related papers (2022-10-27T06:12:32Z)
- Training OOD Detectors in their Natural Habitats
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data, which naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.