NODI: Out-Of-Distribution Detection with Noise from Diffusion
- URL: http://arxiv.org/abs/2401.08689v2
- Date: Thu, 18 Jan 2024 06:45:16 GMT
- Title: NODI: Out-Of-Distribution Detection with Noise from Diffusion
- Authors: Jingqiu Zhou, Aojun Zhou, Hongsheng Li
- Abstract summary: Out-of-distribution (OOD) detection is a crucial part of deploying machine learning models safely.
Previous methods compute the OOD scores with limited usage of the in-distribution dataset.
A 3.5% performance gain is achieved with the MAE-based image encoder.
- Score: 45.68745522344308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is a crucial part of deploying machine
learning models safely. It has been extensively studied, with a plethora of
methods developed in the literature. The problem is typically tackled by
computing an OOD score; however, previous methods compute this score with
limited usage of the in-distribution dataset. For instance, the OOD scores are
computed with information from only a small portion of the in-distribution
data. Furthermore, these methods encode images with a neural image encoder,
yet their robustness is rarely checked with respect to image encoders of
different training methods and architectures. In this work, we introduce the
diffusion process into the OOD task. The diffusion model integrates
information from the whole training set into its predicted noise vectors.
Moreover, we deduce a closed-form solution for the noise vector (the stable
point). The noise vector is then converted into our OOD score, and we test
both the deep-model-predicted noise vector and the closed-form noise vector on
the OpenOOD benchmarks. Our method outperforms previous OOD methods across all
types of image encoders, with a 3.5% performance gain achieved with the
MAE-based image encoder. Moreover, we study the robustness of OOD methods by
applying different types of image encoders. Some OOD methods fail to
generalize well when switching image encoders from ResNets to Vision
Transformers, whereas our method exhibits good robustness with all the image
encoders tested.
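To make the idea above concrete, here is a minimal sketch of a diffusion-noise OOD score: encode an image, run one step of the forward diffusion process on its features, and score the input by how well the in-distribution-trained diffusion model predicts the injected noise. The function names, the feature-space formulation, and the choice of prediction-error norm are illustrative assumptions; the paper's closed-form stable-point variant is not reproduced here.

```python
import torch

@torch.no_grad()
def diffusion_noise_ood_score(x, encoder, noise_predictor, alpha_bar_t, t):
    # Encode the image into the feature space the diffusion model was trained on.
    z0 = encoder(x)
    # Forward diffusion: q(z_t | z_0) = N(sqrt(a_bar_t) * z_0, (1 - a_bar_t) * I).
    eps = torch.randn_like(z0)
    zt = alpha_bar_t ** 0.5 * z0 + (1.0 - alpha_bar_t) ** 0.5 * eps
    # The diffusion model, trained only on in-distribution data, predicts
    # the injected noise; the prediction degrades on OOD inputs.
    eps_hat = noise_predictor(zt, t)
    # Use the prediction error as the OOD score (higher = more OOD).
    return (eps_hat - eps).flatten(1).norm(dim=1)
```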
Related papers
- Evaluating Reliability in Medical DNNs: A Critical Analysis of Feature and Confidence-Based OOD Detection [2.9049649065453336]
OOD detection methods can be categorised as confidence-based (using the model's output layer for OOD detection) or feature-based (not using the output layer).
We show that OOD artefacts can boost a model's softmax confidence in its predictions, due to correlations in training data among other factors.
We also show that feature-based methods typically perform worse at distinguishing between inputs that lead to correct and incorrect predictions.
arXiv Detail & Related papers (2024-08-30T15:02:22Z)
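The confidence-based vs. feature-based split above can be illustrated with two standard baselines: maximum softmax probability (output layer only) and a k-nearest-neighbor distance in feature space. These are generic examples of the two categories, not necessarily the detectors evaluated in the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(logits):
    # Confidence-based: maximum softmax probability (higher = more in-distribution).
    return F.softmax(logits, dim=-1).max(dim=-1).values

@torch.no_grad()
def knn_score(features, train_features, k=50):
    # Feature-based: negative distance to the k-th nearest training feature.
    f = F.normalize(features, dim=-1)
    g = F.normalize(train_features, dim=-1)
    dist = torch.cdist(f, g)
    return -dist.topk(k, dim=-1, largest=False).values[:, -1]
```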
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples vs. OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
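The "poor separation" finding can be checked directly: restrict the ID set to samples the classifier misclassifies and measure how well the OOD score separates them from true OOD inputs. A hedged sketch, assuming scores follow the higher-is-more-ID convention:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def misclassified_id_vs_ood_auroc(id_scores, id_correct, ood_scores):
    # Restrict the ID samples to those the classifier got wrong.
    wrong_id = id_scores[~id_correct]
    labels = np.concatenate([np.ones(len(wrong_id)), np.zeros(len(ood_scores))])
    scores = np.concatenate([wrong_id, ood_scores])
    # AUROC near 0.5 means the detector cannot tell misclassified ID from OOD.
    return roc_auc_score(labels, scores)
```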
- Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting [133.55037976429088]
We investigate the adversarial robustness of vision transformers equipped with BERT pretraining (e.g., BEiT, MAE).
A surprising observation is that MAE has significantly worse adversarial robustness than other BERT pretraining methods.
We propose a simple yet effective way to boost the adversarial robustness of MAE.
arXiv Detail & Related papers (2023-08-20T16:27:17Z)
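The summary does not spell out the proposed fix, but the title suggests a prompt applied in the frequency domain at test time. The following is a speculative sketch of that general idea (an additive learnable perturbation on the Fourier spectrum), not the paper's actual method:

```python
import torch

def apply_frequency_prompt(x, prompt):
    # x: (B, C, H, W) images; prompt: tensor broadcastable to the FFT of x.
    spectrum = torch.fft.fft2(x)
    spectrum = spectrum + prompt           # additive prompt in the frequency domain
    return torch.fft.ifft2(spectrum).real  # back to pixel space
```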
- Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models [23.302018871162186]
We propose a novel one-class open-set OOD detector that leverages text-image pre-trained models in a zero-shot fashion.
Our approach is designed to detect anything not in-domain and offers the flexibility to detect a wide variety of OOD samples.
Our method shows superior performance over previous methods on all benchmarks.
arXiv Detail & Related papers (2023-05-26T18:58:56Z)
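A common way to build such a zero-shot detector with a text-image model like CLIP is to embed prompts for the in-domain classes and score an image by its maximum cosine similarity to them. This sketch shows that generic recipe; the paper's exact detector may differ.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

@torch.no_grad()
def clip_ood_score(image, in_domain_classes, device="cpu"):
    model, preprocess = clip.load("ViT-B/32", device=device)
    text = clip.tokenize([f"a photo of a {c}" for c in in_domain_classes]).to(device)
    t = model.encode_text(text)
    t = t / t.norm(dim=-1, keepdim=True)
    v = model.encode_image(preprocess(image).unsqueeze(0).to(device))
    v = v / v.norm(dim=-1, keepdim=True)
    # Low maximum similarity to every in-domain prompt suggests the input is OOD.
    return (v @ t.T).max().item()
```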
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method that uses masked images as counterfactual samples to help improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
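One plausible reading of "masked images as counterfactual samples" is to mask random patches of a training image and refill them from another image, then fine-tune on these variants. The sketch below shows only that data-construction step, with the masking scheme as an assumption:

```python
import torch

def masked_counterfactual(x, x_other, mask_ratio=0.5, patch=16):
    # Patch-level random mask, shared across channels.
    B, C, H, W = x.shape
    keep = torch.rand(B, 1, H // patch, W // patch, device=x.device) > mask_ratio
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    # Keep unmasked patches of x; fill the masked regions from another image.
    return torch.where(keep, x, x_other)
```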
- Denoising Masked AutoEncoders are Certifiable Robust Vision Learners [37.04863068273281]
We propose a new self-supervised method called Denoising Masked AutoEncoders (DMAE).
DMAE corrupts each image by adding Gaussian noise to each pixel value and randomly masking several patches.
A Transformer-based encoder-decoder model is then trained to reconstruct the original image from the corrupted one.
arXiv Detail & Related papers (2022-10-10T12:37:59Z)
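The corruption described above is concrete enough to sketch: Gaussian noise on every pixel followed by random patch masking (zeroed out here; a ViT encoder would typically drop the masked patches instead):

```python
import torch

def dmae_corrupt(x, sigma=0.25, mask_ratio=0.75, patch=16):
    # Step 1: add Gaussian noise to every pixel value.
    noisy = x + sigma * torch.randn_like(x)
    # Step 2: randomly mask patches at the patch grid level.
    B, C, H, W = x.shape
    keep = torch.rand(B, 1, H // patch, W // patch, device=x.device) > mask_ratio
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    # The encoder-decoder is then trained to reconstruct the original x.
    return noisy * keep
```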
- A Simple Test-Time Method for Out-of-Distribution Detection [45.11199798139358]
This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated with the features extracted by neural networks.
We propose an online variant of the proposed method, which achieves promising performance and is more practical in real-world applications.
arXiv Detail & Related papers (2022-07-17T16:02:58Z)
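If OOD-ness is roughly linear in the features, a linear probe can be fitted at test time to refine an initial OOD score. A minimal closed-form ridge-regression sketch in that spirit (the actual ETLT procedure may differ):

```python
import torch

def test_time_linear_refine(features, init_scores, l2=1e-3):
    # Closed-form ridge regression: fit features -> initial OOD scores.
    ones = torch.ones(features.size(0), 1, device=features.device)
    X = torch.cat([features, ones], dim=1)  # append a bias column
    A = X.T @ X + l2 * torch.eye(X.size(1), device=X.device)
    w = torch.linalg.solve(A, X.T @ init_scores)
    # The linear probe's output serves as the refined OOD score.
    return X @ w
```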
- Transformer-based out-of-distribution detection for clinically safe segmentation [1.649654992058168]
In a clinical setting it is essential that deployed image processing systems do not make confidently wrong predictions.
In this work, we focus on image segmentation and evaluate several approaches to network uncertainty.
We propose performing full 3D OOD detection using a VQ-GAN to provide a compressed latent representation of the image and a transformer to estimate the data likelihood.
arXiv Detail & Related papers (2022-05-21T17:55:09Z)
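The pipeline described above can be sketched as: compress the image to discrete latent tokens with the VQ-GAN, then score it by the autoregressive transformer's negative log-likelihood over those tokens. The `encode_to_tokens` interface and the transformer signature are assumptions:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def latent_likelihood_ood_score(x, vqgan, transformer):
    # Compress the image to a sequence of discrete latent codes.
    tokens = vqgan.encode_to_tokens(x)    # (B, L) integer token ids (assumed API)
    # Autoregressive next-token logits for positions 1..L-1.
    logits = transformer(tokens[:, :-1])  # (B, L-1, vocab)
    nll = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
        reduction="none",
    ).view(tokens.size(0), -1)
    # High average negative log-likelihood marks the input as OOD.
    return nll.mean(dim=1)
```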
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
In contrast to existing approaches, the introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements of state-of-the-art models on semi-supervised image classification.
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
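For reference, the standard denoising score matching objective that this line of work builds on: perturb x with Gaussian noise and train the score network to match the score of the perturbation kernel. This is the textbook formulation, not the paper's representation-conditioned variant.

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma=0.1):
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise
    # Score of the Gaussian kernel: grad log q(x~ | x) = -(x~ - x) / sigma^2.
    target = -noise / sigma
    return ((score_net(x_tilde) - target) ** 2).flatten(1).sum(dim=1).mean()
```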
- RAIN: A Simple Approach for Robust and Accurate Image Classification Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of sacrificing prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN).
RAIN applies randomization over inputs to break the ties between the model forward prediction path and the backward gradient path, thus improving the model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks.
arXiv Detail & Related papers (2020-04-24T02:03:56Z)
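The "randomization over inputs" idea can be sketched with a classic random resize-and-pad transform applied before the forward pass, so the gradients an attacker computes no longer match the exact path used at prediction time. RAIN's actual randomization may differ:

```python
import random
import torch
import torch.nn.functional as F

def randomize_input(x, min_scale=0.85):
    # Randomly downscale, then randomly pad back to the original size, which
    # decouples the forward prediction path from the backward gradient path.
    B, C, H, W = x.shape
    s = random.uniform(min_scale, 1.0)
    h, w = max(1, int(H * s)), max(1, int(W * s))
    x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
    top, left = random.randint(0, H - h), random.randint(0, W - w)
    return F.pad(x, (left, W - w - left, top, H - h - top))
```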
This list is automatically generated from the titles and abstracts of the papers on this site.