Diffusion Denoising Process for Perceptron Bias in Out-of-distribution
Detection
- URL: http://arxiv.org/abs/2211.11255v2
- Date: Sun, 4 Jun 2023 02:06:29 GMT
- Title: Diffusion Denoising Process for Perceptron Bias in Out-of-distribution
Detection
- Authors: Luping Liu and Yi Ren and Xize Cheng and Rongjie Huang and Chongxuan
Li and Zhou Zhao
- Abstract summary: We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
- Score: 67.49587673594276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is a crucial task for ensuring the
reliability and safety of deep learning. Currently, discriminator models
outperform other methods in this regard. However, the feature extraction
process used by discriminator models suffers from the loss of critical
information, leaving room for bad cases and malicious attacks. In this paper,
we introduce a new perceptron bias assumption that suggests discriminator
models are more sensitive to certain features of the input, leading to the
overconfidence problem. To address this issue, we propose a novel framework
that combines discriminator and generation models and integrates diffusion
models (DMs) into OOD detection. We demonstrate that the diffusion denoising
process (DDP) of DMs serves as a novel form of asymmetric interpolation, which
is well-suited to enhance the input and mitigate the overconfidence problem.
The discriminator model features of OOD data exhibit sharp changes under DDP,
and we utilize the norm of this change as the indicator score. Our experiments
on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA
approaches. Notably, for the challenging InD ImageNet and OOD species datasets,
our method achieves an AUROC of 85.7, surpassing the previous SOTA method's
score of 77.4. Our implementation is available at
\url{https://github.com/luping-liu/DiffOOD}.
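To make the scoring rule concrete, here is a minimal sketch of the indicator the abstract describes: denoise the input with a diffusion model, extract discriminator features before and after, and take the norm of the feature change as the OOD score. The interfaces denoise(x, t), features(x), and the noise level t are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch (not the authors' code) of the DDP-based OOD score:
# score(x) = || features(denoise(x)) - features(x) ||
import torch

@torch.no_grad()
def ddp_ood_score(x, denoise, features, t=0.1):
    """Per-sample OOD indicator score for a batch of images x.

    Assumed (hypothetical) interfaces:
      denoise(x, t)  -- noises x to level t with a pretrained diffusion
                        model, then runs the reverse denoising process.
      features(x)    -- penultimate-layer features of a discriminator.
    """
    x_denoised = denoise(x, t)                 # diffusion denoising process (DDP)
    delta = features(x_denoised) - features(x) # feature change under DDP
    # OOD features shift sharply under DDP, so a larger norm means more likely OOD.
    return delta.flatten(start_dim=1).norm(dim=1)
```

In-distribution inputs should change little under DDP, so higher scores flag OOD samples; score sets for InD and OOD data can then be summarized with AUROC (e.g., sklearn.metrics.roc_auc_score), the metric reported above.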
Related papers
- DSDE: Using Proportion Estimation to Improve Model Selection for Out-of-Distribution Detection [15.238164468992148]
Experimental results on CIFAR10 and CIFAR100 demonstrate the effectiveness of our approach in tackling OoD detection challenges.
We name the proposed approach the DOS-Storey-based Detector Ensemble (DSDE).
arXiv Detail & Related papers (2024-11-03T09:01:36Z)
- Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), by fully utilizing the potential of the pre-trained stable diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
arXiv Detail & Related papers (2024-10-04T01:52:23Z)
- Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration [74.09687562334682]
We introduce a novel training data attribution method called Debias and Denoise Attribution (DDA).
Our method significantly outperforms existing approaches, achieving an averaged AUC of 91.64%.
DDA exhibits strong generality and scalability across various sources and different-scale models like LLaMA2, QWEN2, and Mistral.
arXiv Detail & Related papers (2024-10-02T07:14:26Z)
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Mitigating Exposure Bias in Discriminator Guided Diffusion Models [4.5349436061325425]
We propose SEDM-G++, which incorporates a modified sampling approach, combining Discriminator Guidance and Epsilon Scaling.
Our proposed approach outperforms the current state-of-the-art, by achieving an FID score of 1.73 on the unconditional CIFAR-10 dataset.
arXiv Detail & Related papers (2023-11-18T20:49:50Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Augmenting Softmax Information for Selective Classification with Out-of-Distribution Data [7.221206118679026]
We show that existing post-hoc methods perform quite differently under this setting than when evaluated on OOD detection alone.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
arXiv Detail & Related papers (2022-07-15T14:39:57Z)
- Detecting Out-of-distribution Samples via Variational Auto-encoder with Reliable Uncertainty Estimation [5.430048915427229]
Variational autoencoders (VAEs) are influential generative models with rich representation capabilities.
However, VAE models have a known weakness: they assign a higher likelihood to out-of-distribution (OOD) inputs than to in-distribution (ID) inputs.
In this study, we propose an improved noise contrastive prior (INCP) that can be integrated into the encoder of VAEs; we call the resulting model INCPVAE.
arXiv Detail & Related papers (2020-07-16T06:02:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.