GOOD: Training-Free Guided Diffusion Sampling for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2510.17131v2
- Date: Mon, 27 Oct 2025 02:58:39 GMT
- Title: GOOD: Training-Free Guided Diffusion Sampling for Out-of-Distribution Detection
- Authors: Xin Gao, Jiyao Liu, Guanghao Li, Yueming Lyu, Jianxiong Gao, Weichen Yu, Ningsheng Xu, Liang Wang, Caifeng Shan, Ziwei Liu, Chenyang Si
- Abstract summary: GOOD is a novel framework that guides sampling trajectories towards OOD regions using off-the-shelf in-distribution (ID) classifiers. GOOD incorporates dual-level guidance: image-level guidance, based on the gradient of the log partition, reduces input likelihood and drives samples toward low-density regions in pixel space. We introduce a unified OOD score that adaptively combines image and feature discrepancies, enhancing detection robustness.
- Score: 61.96025941146103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements have explored text-to-image diffusion models for synthesizing out-of-distribution (OOD) samples, substantially enhancing the performance of OOD detection. However, existing approaches typically rely on perturbing text-conditioned embeddings, resulting in semantic instability and insufficient shift diversity, which limit generalization to realistic OOD data. To address these challenges, we propose GOOD, a novel and flexible framework that directly guides diffusion sampling trajectories towards OOD regions using off-the-shelf in-distribution (ID) classifiers. GOOD incorporates dual-level guidance: (1) Image-level guidance, based on the gradient of the log partition, reduces input likelihood and drives samples toward low-density regions in pixel space. (2) Feature-level guidance, derived from k-NN distance in the classifier's latent space, promotes sampling in feature-sparse regions. This dual-guidance design enables more controllable and diverse OOD sample generation. Additionally, we introduce a unified OOD score that adaptively combines image and feature discrepancies, enhancing detection robustness. We perform thorough quantitative and qualitative analyses to evaluate the effectiveness of GOOD, demonstrating that training with samples generated by GOOD can notably enhance OOD detection performance.
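As a rough illustration of the dual-level guidance described above, the sketch below computes an image-level gradient of the log partition (logsumexp of the classifier logits) and a feature-level gradient of a k-NN distance to an ID feature bank, then mixes both into the predicted noise of a diffusion step. All names (the `classifier.features` hook, the guidance scales, the sign conventions) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of dual-level guidance for OOD-directed diffusion sampling.
# Assumptions: `classifier` returns logits, `classifier.features` exposes its
# penultimate features, and `id_feature_bank` holds ID training features.
import torch

def image_level_guidance(x, classifier):
    """Gradient of the log partition (logsumexp of logits) w.r.t. the input;
    moving against it lowers the estimated input likelihood in pixel space."""
    x = x.detach().requires_grad_(True)
    log_partition = torch.logsumexp(classifier(x), dim=-1).sum()
    return torch.autograd.grad(log_partition, x)[0]

def feature_level_guidance(x, classifier, id_feature_bank, k=5):
    """Gradient of the mean k-NN distance to ID features; moving along it
    pushes the sample toward feature-sparse regions."""
    x = x.detach().requires_grad_(True)
    feats = classifier.features(x)                      # assumed feature hook
    dists = torch.cdist(feats, id_feature_bank)         # (batch, num_ID_feats)
    knn_dist = dists.topk(k, largest=False).values.mean(dim=-1).sum()
    return torch.autograd.grad(knn_dist, x)[0]

def guided_noise(x_t, eps_pred, classifier, id_feature_bank,
                 lambda_img=1.0, lambda_feat=1.0):
    """Adjust the diffusion model's predicted noise with both guidance terms
    (scales and signs are assumptions; the paper defines the exact update)."""
    g_img = image_level_guidance(x_t, classifier)
    g_feat = feature_level_guidance(x_t, classifier, id_feature_bank)
    return eps_pred + lambda_img * g_img - lambda_feat * g_feat
```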
Related papers
- Learning to Explore: Policy-Guided Outlier Synthesis for Graph Out-of-Distribution Detection [51.93878677594561]
In unsupervised graph-level OOD detection, models are typically trained using only in-distribution (ID) data. We propose a Policy-Guided Outlier Synthesis framework that replaces static outlier synthesis with a learned exploration strategy.
arXiv Detail & Related papers (2026-02-28T11:40:18Z) - Predictive Sample Assignment for Semantically Coherent Out-of-Distribution Detection [62.1052001316508]
Semantically coherent out-of-distribution detection (SCOOD) is a recently proposed realistic OOD detection setting. We propose a concise SCOOD framework based on predictive sample assignment (PSA). Our approach outperforms the state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2025-12-15T01:18:38Z) - Revisiting Logit Distributions for Reliable Out-of-Distribution Detection [73.9121001113687]
Out-of-distribution (OOD) detection is critical for ensuring the reliability of deep learning models in open-world applications. LogitGap is a novel post-hoc OOD detection method that exploits the relationship between the maximum logit and the remaining logits. We show that LogitGap consistently achieves state-of-the-art performance across diverse OOD detection scenarios and benchmarks.
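Going by the one-line description above, a LogitGap-style post-hoc score can be sketched as the gap between the maximum logit and the remaining logits; the exact formulation in the paper may differ, and this snippet is only an assumed illustration.

```python
# Hedged sketch of a logit-gap style post-hoc OOD score (illustrative only).
import torch

def logit_gap_score(logits: torch.Tensor) -> torch.Tensor:
    """Gap between the max logit and the mean of the remaining logits;
    a larger gap is taken as evidence the input is in-distribution."""
    max_logit = logits.max(dim=-1).values
    rest_mean = (logits.sum(dim=-1) - max_logit) / (logits.shape[-1] - 1)
    return max_logit - rest_mean
```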
arXiv Detail & Related papers (2025-10-23T02:16:45Z) - Pseudo-label Induced Subspace Representation Learning for Robust Out-of-Distribution Detection [6.5679810906772325]
We propose a novel OOD detection framework based on a pseudo-label-induced subspace representation. In addition, we introduce a simple yet effective learning criterion that integrates a cross-entropy-based ID classification loss with a subspace distance-based regularization loss to enhance ID-OOD separability.
arXiv Detail & Related papers (2025-08-05T05:38:00Z) - Evidential Spectrum-Aware Contrastive Learning for OOD Detection in Dynamic Graphs [17.750640850821622]
Out-of-distribution (OOD) detection in dynamic graphs aims to identify whether incoming data deviates from the distribution of the in-distribution (ID) training set. We propose EviSEC, an innovative and effective OOD detector via Evidential Spectrum-awarE Contrastive Learning.
arXiv Detail & Related papers (2025-06-09T04:34:46Z) - Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z) - GROOD: GRadient-Aware Out-of-Distribution Detection [11.511906612904255]
Out-of-distribution (OOD) detection is crucial for ensuring the reliability of deep learning models in real-world applications. We propose GRadient-aware Out-Of-Distribution detection (GROOD), a method that derives an OOD prototype from synthetic samples and computes class prototypes directly from in-distribution (ID) training data.
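To make the prototype idea above concrete, the fragment below builds per-class prototypes from ID features and a single OOD prototype from synthetic-outlier features, then scores a test feature by its distance margin between the two. The Euclidean distance and margin form are assumptions, not GROOD's exact gradient-aware formulation.

```python
# Illustrative prototype-distance score in the spirit of GROOD (assumed form).
import torch

def build_prototypes(id_feats, id_labels, ood_feats, num_classes):
    """Mean ID feature per class, plus one OOD prototype from synthetic samples."""
    class_protos = torch.stack(
        [id_feats[id_labels == c].mean(dim=0) for c in range(num_classes)])
    ood_proto = ood_feats.mean(dim=0)
    return class_protos, ood_proto

def prototype_margin_score(feats, class_protos, ood_proto):
    """ID-ness score: distance to the OOD prototype minus distance to the
    nearest class prototype (larger values suggest in-distribution)."""
    d_class = torch.cdist(feats, class_protos).min(dim=-1).values
    d_ood = torch.linalg.norm(feats - ood_proto, dim=-1)
    return d_ood - d_class
```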
arXiv Detail & Related papers (2023-12-22T04:28:43Z) - Likelihood-Aware Semantic Alignment for Full-Spectrum Out-of-Distribution Detection [24.145060992747077]
We propose a Likelihood-Aware Semantic Alignment (LSA) framework to promote the image-text correspondence into semantically high-likelihood regions.
Extensive experiments demonstrate the remarkable OOD detection performance of our proposed LSA, surpassing existing methods by margins of 15.26% and 18.88% on two F-OOD benchmarks.
arXiv Detail & Related papers (2023-12-04T08:53:59Z) - Distilling the Unknown to Unveil Certainty [66.29929319664167]
Out-of-distribution (OOD) detection is critical for identifying test samples that deviate from in-distribution (ID) data, ensuring network robustness and reliability. This paper presents a flexible framework for OOD knowledge distillation that extracts OOD-sensitive information from a network to develop a binary classifier capable of distinguishing between ID and OOD samples.
arXiv Detail & Related papers (2023-11-14T08:05:02Z) - Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection [24.821151013905865]
Methods which utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs.
We provide a possible explanation for SN-OOD detection failures and propose nuisance-aware OOD detection to address them.
arXiv Detail & Related papers (2023-02-08T15:28:33Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
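For reference, an energy-based node-level OOD score of the kind GNNSafe builds on can be sketched as the negative logsumexp of the GNN's logits, optionally smoothed over the graph; the propagation scheme and its hyperparameters below are assumptions rather than the paper's exact design.

```python
# Hedged sketch of an energy-based OOD score for GNN node classification.
import torch

def node_energy(logits: torch.Tensor) -> torch.Tensor:
    """Per-node energy -logsumexp(logits); higher energy suggests OOD."""
    return -torch.logsumexp(logits, dim=-1)

def propagate_energy(energy: torch.Tensor, adj_norm: torch.Tensor,
                     alpha: float = 0.5, steps: int = 2) -> torch.Tensor:
    """Simple smoothing of node energies over a normalized adjacency matrix
    (illustrative; not necessarily the paper's propagation rule)."""
    e = energy
    for _ in range(steps):
        e = alpha * e + (1 - alpha) * (adj_norm @ e)
    return e
```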
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.