Diffusion Denoising Process for Perceptron Bias in Out-of-distribution
Detection
- URL: http://arxiv.org/abs/2211.11255v2
- Date: Sun, 4 Jun 2023 02:06:29 GMT
- Title: Diffusion Denoising Process for Perceptron Bias in Out-of-distribution
Detection
- Authors: Luping Liu and Yi Ren and Xize Cheng and Rongjie Huang and Chongxuan
Li and Zhou Zhao
- Abstract summary: We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
- Score: 67.49587673594276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is a crucial task for ensuring the
reliability and safety of deep learning. Currently, discriminator models
outperform other methods in this regard. However, the feature extraction
process used by discriminator models suffers from the loss of critical
information, leaving room for bad cases and malicious attacks. In this paper,
we introduce a new perceptron bias assumption that suggests discriminator
models are more sensitive to certain features of the input, leading to the
overconfidence problem. To address this issue, we propose a novel framework
that combines discriminator and generation models and integrates diffusion
models (DMs) into OOD detection. We demonstrate that the diffusion denoising
process (DDP) of DMs serves as a novel form of asymmetric interpolation, which
is well-suited to enhance the input and mitigate the overconfidence problem.
The discriminator model features of OOD data exhibit sharp changes under DDP,
and we utilize the norm of this change as the indicator score. Our experiments
on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA
approaches. Notably, for the challenging InD ImageNet and OOD Species datasets,
our method achieves an AUROC of 85.7, surpassing the previous SOTA method's
score of 77.4. Our implementation is available at
\url{https://github.com/luping-liu/DiffOOD}.
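As a rough illustration of the scoring rule described in the abstract (a minimal sketch, not the repository's actual code): diffuse the input, denoise it back, and take the norm of the resulting change in discriminator features. The `diffuse`, `denoise`, and `feature_extractor` callables and the step count `t` are hypothetical placeholders for a pretrained diffusion model and classifier:

```python
import torch

@torch.no_grad()
def ddp_ood_score(x, feature_extractor, diffuse, denoise, t=200):
    """OOD indicator: norm of the feature change under the diffusion
    denoising process (DDP). All callables are hypothetical stand-ins."""
    feats_orig = feature_extractor(x)          # discriminator features of the raw input
    x_noisy = diffuse(x, t)                    # forward process: add t steps of noise
    x_denoised = denoise(x_noisy, t)           # reverse process: denoise back to step 0
    feats_ddp = feature_extractor(x_denoised)  # features after the DDP round trip
    # OOD features shift sharply under DDP, so a larger norm suggests OOD
    return (feats_ddp - feats_orig).norm(dim=-1)
```

At test time, inputs whose score exceeds a threshold chosen on in-distribution validation data would be flagged as OOD.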
Related papers
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor for the input/output space of diffusion models, our method embeds the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Mitigating Exposure Bias in Discriminator Guided Diffusion Models [4.5349436061325425]
We propose SEDM-G++, which incorporates a modified sampling approach, combining Discriminator Guidance and Epsilon Scaling.
Our proposed approach outperforms the current state-of-the-art, by achieving an FID score of 1.73 on the unconditional CIFAR-10 dataset.
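A minimal sketch of one reverse step combining the two ingredients, assuming a standard DDPM update; `disc_log_ratio_grad`, `lam`, and `w` are illustrative placeholders, not the schedule used by SEDM-G++:

```python
import torch

def sedm_like_step(x_t, t, eps_model, disc_log_ratio_grad, alphas, alphas_bar,
                   lam=1.004, w=1.0):
    """One illustrative reverse DDPM step (constants are placeholders)."""
    eps = eps_model(x_t, t) / lam  # Epsilon Scaling: shrink the predicted noise
    # Discriminator Guidance: correct eps with the discriminator's density-ratio gradient
    eps = eps - w * torch.sqrt(1.0 - alphas_bar[t]) * disc_log_ratio_grad(x_t, t)
    mean = (x_t - (1.0 - alphas[t]) / torch.sqrt(1.0 - alphas_bar[t]) * eps) \
           / torch.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + torch.sqrt(1.0 - alphas[t]) * torch.randn_like(x_t)
```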
arXiv Detail & Related papers (2023-11-18T20:49:50Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Imaging Inverse Problems [78.76955228709241]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the denoising network specifically to the available measured data.
We achieve substantial enhancements in OOD performance across diverse imaging modalities.
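A minimal sketch of the adaptation idea, assuming a measurement model y = forward_op(x) + noise and a denoiser fine-tuned at test time for data consistency; the update rule, optimizer, and step count are assumptions rather than the paper's recipe:

```python
import torch

def adapt_and_estimate(x_t, t, y, forward_op, denoiser, optimizer, n_updates=2):
    """Test-time adaptation sketch: nudge the denoiser so its clean estimate
    is consistent with the measured data y (names are placeholders)."""
    for _ in range(n_updates):
        optimizer.zero_grad()
        x0_hat = denoiser(x_t, t)                      # current clean-image estimate
        loss = (forward_op(x0_hat) - y).pow(2).mean()  # measurement consistency
        loss.backward()
        optimizer.step()                               # update (a subset of) denoiser weights
    with torch.no_grad():
        return denoiser(x_t, t)                        # adapted estimate for this step
```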
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
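A minimal sketch of a hybrid discriminative-generative objective in this spirit: a predictive code and a nuisance code must jointly reconstruct the input, so predictive information cannot be silently discarded. Module names and the weighting `beta` are placeholders, not the paper's architecture:

```python
import torch
import torch.nn.functional as F

def hybrid_loss(x, y, encoder, nuisance_enc, decoder, classifier, beta=1.0):
    """Hybrid discriminative-generative sketch: a predictive code z and a
    nuisance code n must jointly reconstruct the input."""
    z = encoder(x)                          # task-predictive representation
    n = nuisance_enc(x)                     # nuisance representation
    ce = F.cross_entropy(classifier(z), y)  # discriminative term
    recon = F.mse_loss(decoder(torch.cat([z, n], dim=1)), x)  # generative term
    return ce + beta * recon
```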
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
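A minimal sketch of a LINe-style post-hoc score, substituting simple top-k magnitude selection for the paper's importance estimates; `backbone`, `fc_weight`, and `keep_ratio` are placeholders:

```python
import torch

@torch.no_grad()
def line_style_score(x, backbone, fc_weight, fc_bias, keep_ratio=0.1):
    """Keep only the most important activations (here, top-k by magnitude as
    a stand-in for the paper's selection), then use the energy score."""
    feats = backbone(x)                           # (B, D) penultimate activations
    k = max(1, int(keep_ratio * feats.shape[1]))
    thresh = feats.topk(k, dim=1).values[:, -1:]  # per-sample top-k threshold
    feats = torch.where(feats >= thresh, feats, torch.zeros_like(feats))
    logits = feats @ fc_weight.T + fc_bias        # classifier head on pruned feats
    return torch.logsumexp(logits, dim=1)         # energy: higher => more ID
```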
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Augmenting Softmax Information for Selective Classification with Out-of-Distribution Data [7.221206118679026]
We show that existing post-hoc methods perform quite differently on selective classification with OOD data (SCOD) than when evaluated only on OOD detection.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
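A minimal sketch of an SIRC-style combination, using the maximum softmax probability as the primary score S1 and the L1 feature norm as the secondary score S2; the functional form and the ID-fitted parameters `a`, `b` follow the general recipe and should be treated as assumptions:

```python
import torch

@torch.no_grad()
def sirc_score(logits, feats, a, b, s1_max=1.0):
    """Combine a softmax confidence S1 with a feature-based score S2:
    low S2 further reduces confidence; higher output => accept the sample."""
    s1 = logits.softmax(dim=1).max(dim=1).values  # max softmax probability
    s2 = feats.abs().sum(dim=1)                   # L1 norm of features
    return -(s1_max - s1) * (1 + torch.exp(-b * (s2 - a)))
```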
arXiv Detail & Related papers (2022-07-15T14:39:57Z)
- RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection [12.341250124228859]
We propose a simple yet effective generalized OOD detection method independent of out-of-distribution datasets.
Our approach relies on self-supervised feature learning of the training samples, where the embeddings lie on a compact low-dimensional space.
We empirically show that a pre-trained model with self-supervised contrastive learning yields a better model for uni-dimensional feature learning in the latent space.
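A minimal sketch of scoring in this spirit: embed the input with a self-supervised encoder and take its best cosine alignment with per-class directions estimated from training features; `encoder` and `class_directions` are placeholders for the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rodd_style_score(x, encoder, class_directions):
    """Cosine alignment with per-class directions in the self-supervised
    latent space. Higher => more in-distribution."""
    z = F.normalize(encoder(x), dim=1)        # (B, D) unit embeddings
    d = F.normalize(class_directions, dim=1)  # (K, D) per-class directions
    return (z @ d.T).max(dim=1).values        # best class alignment
```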
arXiv Detail & Related papers (2022-04-06T03:05:58Z)
- Detecting Out-of-distribution Samples via Variational Auto-encoder with Reliable Uncertainty Estimation [5.430048915427229]
Variational autoencoders (VAEs) are influential generative models with rich representation capabilities.
However, VAE models have a weakness: they assign higher likelihoods to out-of-distribution (OOD) inputs than to in-distribution (ID) inputs.
In this study, we propose an improved noise contrastive prior (INCP) that can be integrated into the encoder of VAEs, yielding INCPVAE.
arXiv Detail & Related papers (2020-07-16T06:02:18Z)
- NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when test data come from a distribution different from the training data.
Existing OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when in-distribution and OOD inputs are adversarially perturbed.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
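A minimal sketch of an ALOE-style objective, with a single-step FGSM perturbation standing in for the paper's inner maximization; `eps` and `lam` are placeholder hyperparameters:

```python
import torch
import torch.nn.functional as F

def aloe_style_loss(model, x_in, y, x_out, eps=8 / 255, lam=0.5):
    """Train on worst-case perturbations of inliers (cross-entropy) and
    outliers (pushed toward a uniform posterior)."""
    x_in = x_in.clone().requires_grad_(True)
    x_out = x_out.clone().requires_grad_(True)
    n_cls = model(x_in).shape[1]
    uniform = torch.full((x_out.shape[0], n_cls), 1.0 / n_cls, device=x_out.device)
    ce = F.cross_entropy(model(x_in), y)
    kl = F.kl_div(model(x_out).log_softmax(dim=1), uniform, reduction='batchmean')
    g_in, g_out = torch.autograd.grad(ce + lam * kl, [x_in, x_out])
    x_in_adv = (x_in + eps * g_in.sign()).clamp(0, 1).detach()    # adversarial inliers
    x_out_adv = (x_out + eps * g_out.sign()).clamp(0, 1).detach() # adversarial outliers
    ce_adv = F.cross_entropy(model(x_in_adv), y)
    kl_adv = F.kl_div(model(x_out_adv).log_softmax(dim=1), uniform, reduction='batchmean')
    return ce_adv + lam * kl_adv
```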
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.