BOOD: Boundary-based Out-Of-Distribution Data Generation
- URL: http://arxiv.org/abs/2508.00350v1
- Date: Fri, 01 Aug 2025 06:34:27 GMT
- Title: BOOD: Boundary-based Out-Of-Distribution Data Generation
- Authors: Qilin Liao, Shuo Yang, Bo Zhao, Ping Luo, Hengshuang Zhao
- Abstract summary: This paper proposes a novel framework called Boundary-based Out-Of-Distribution data generation (BOOD). BOOD synthesizes high-quality OOD features and generates human-compatible outlier images using diffusion models.
- Score: 47.47757472521503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Harnessing the power of diffusion models to synthesize auxiliary training data based on latent space features has proven effective in enhancing out-of-distribution (OOD) detection performance. However, extracting effective features outside the in-distribution (ID) boundary in latent space remains challenging due to the difficulty of identifying decision boundaries between classes. This paper proposes a novel framework called Boundary-based Out-Of-Distribution data generation (BOOD), which synthesizes high-quality OOD features and generates human-compatible outlier images using diffusion models. BOOD first learns a text-conditioned latent feature space from the ID dataset, selects ID features closest to the decision boundary, and perturbs them to cross the decision boundary to form OOD features. These synthetic OOD features are then decoded into images in pixel space by a diffusion model. Compared to previous works, BOOD provides a more training-efficient strategy for synthesizing informative OOD features, facilitating clearer distinctions between ID and OOD data. Extensive experimental results on common benchmarks demonstrate that BOOD surpasses the state-of-the-art method significantly, achieving a 29.64% decrease in average FPR95 (40.31% vs. 10.67%) and a 7.27% improvement in average AUROC (90.15% vs. 97.42%) on the CIFAR-100 dataset.
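For illustration, a minimal sketch of the boundary-crossing step described in the abstract, assuming a differentiable (e.g., linear) classifier head over the learned latent features; the selection ratio, step size, and all names are illustrative assumptions rather than the paper's implementation:

```python
import torch
import torch.nn.functional as F

def synthesize_ood_features(features, labels, classifier, step=0.1, max_iters=50):
    """Perturb boundary-near ID latent features until they cross the decision boundary.

    Sketch of the idea: rank ID features by the top-1/top-2 logit margin (a proxy
    for distance to the decision boundary), keep the closest ones, and push them
    across the boundary by ascending the classification loss.
    """
    with torch.no_grad():
        logits = classifier(features)
        top2 = logits.topk(2, dim=1).values
        margin = top2[:, 0] - top2[:, 1]
    near_idx = margin.argsort()[: max(1, len(features) // 10)]  # closest ~10% (assumed ratio)

    z = features[near_idx].clone().requires_grad_(True)
    y = labels[near_idx]
    for _ in range(max_iters):
        loss = F.cross_entropy(classifier(z), y)
        grad, = torch.autograd.grad(loss, z)
        # Gradient ascent on the ID-class loss pushes z away from its own class.
        z = (z + step * grad.sign()).detach().requires_grad_(True)
        if (classifier(z).argmax(dim=1) != y).all():  # all selected features crossed the boundary
            break
    return z.detach()  # synthetic OOD features, to be decoded by the diffusion model
```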
Related papers
- Adversarially Robust Out-of-Distribution Detection Using Lyapunov-Stabilized Embeddings [1.0260351016050424]
AROS is a novel approach leveraging neural ordinary differential equations (NODEs) with the Lyapunov stability theorem. Through a tailored loss function, Lyapunov stability theory is applied to ensure that both in-distribution (ID) and OOD data converge to stable equilibrium points. This encourages any perturbed input to return to its stable equilibrium, enhancing the model's robustness against adversarial perturbations.
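One concrete way a Lyapunov-style stability penalty can be attached to neural-ODE dynamics, shown here as an illustrative sketch with V(z) = 0.5·||z||^2 rather than the authors' exact loss:

```python
import torch
import torch.nn as nn

class ODEDynamics(nn.Module):
    """A small learned vector field f(z) for a neural ODE block."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

def lyapunov_penalty(f, z, margin=0.01):
    """Penalize violations of dV/dt <= -margin for V(z) = 0.5 * ||z||^2.

    Along trajectories of dz/dt = f(z), dV/dt = z . f(z); keeping it negative
    encourages states to contract toward the equilibrium at the origin, the kind
    of stable-equilibrium behavior described above.
    """
    dvdt = (z * f(z)).sum(dim=1)
    return torch.relu(dvdt + margin).mean()
```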
arXiv Detail & Related papers (2024-10-14T17:22:12Z)
- Look Around and Find Out: OOD Detection with Relative Angles [24.369626931550794]
We propose a novel angle-based metric for OOD detection that is computed relative to the in-distribution structure.
Our method achieves state-of-the-art performance on CIFAR-10 and ImageNet benchmarks, reducing FPR95 by 0.88% and 7.74% respectively.
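A hedged sketch of what an angle-based score computed relative to the ID structure can look like, here measuring, from the ID feature mean, the angle between a test feature and its nearest class centroid; this is illustrative, not the paper's exact metric:

```python
import torch
import torch.nn.functional as F

def relative_angle_score(z, class_centroids, id_mean):
    """Angle-based OOD score computed relative to in-distribution structure (sketch).

    From the viewpoint of the ID feature mean, measure the angle between each
    test feature and its closest class centroid. ID features should align with
    some class direction (small angle); larger angles suggest OOD.
    """
    v_test = F.normalize(z - id_mean, dim=1)               # (B, D) test directions
    v_cls = F.normalize(class_centroids - id_mean, dim=1)  # (C, D) class directions
    cos = v_test @ v_cls.T                                  # (B, C) cosine similarities
    return torch.arccos(cos.max(dim=1).values.clamp(-1.0, 1.0))  # larger angle -> more OOD-like
```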
arXiv Detail & Related papers (2024-10-06T15:36:07Z)
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z)
- Diffusion based Semantic Outlier Generation via Nuisance Awareness for Out-of-Distribution Detection [9.936136347796413]
Out-of-distribution (OOD) detection has recently shown promising results through training with synthetic OOD datasets. We propose a novel framework, Semantic Outlier generation via Nuisance Awareness (SONA), which notably produces challenging outliers. Our approach incorporates SONA guidance, providing separate control over semantic and nuisance regions of ID samples.
arXiv Detail & Related papers (2024-08-27T07:52:44Z)
- Enhancing OOD Detection Using Latent Diffusion [5.093257685701887]
Out-of-Distribution (OOD) detection algorithms have been developed to identify unknown samples or objects in real-world deployments. We propose an Outlier Aware Learning framework, which synthesizes OOD training data in the latent space. We also develop a knowledge distillation module to prevent the degradation of ID classification accuracy when training with OOD data.
arXiv Detail & Related papers (2024-06-24T11:01:43Z)
- Fast Decision Boundary based Out-of-Distribution Detector [7.04686607977352]
Out-of-Distribution (OOD) detection is essential for the safe deployment of AI systems.
Existing feature space methods, while effective, often incur significant computational overhead.
We propose a computationally efficient OOD detector without using auxiliary models.
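For a linear classification head, the distance from a feature to each pairwise decision boundary has a closed form, which is one reason boundary-based scores can be computed cheaply without auxiliary models; the sketch below shows that computation and is illustrative rather than the cited detector verbatim:

```python
import torch

def mean_boundary_distance(z, W, b):
    """Average feature-space distance to the decision boundaries of a linear head (sketch).

    For predicted class c and any other class k, the boundary satisfies
    (w_c - w_k) . z + (b_c - b_k) = 0, so the distance from z to it is
    |(w_c - w_k) . z + (b_c - b_k)| / ||w_c - w_k||. Confidently classified
    ID features tend to sit farther from these boundaries.
    """
    logits = z @ W.T + b                                  # (B, C)
    pred = logits.argmax(dim=1)                           # (B,)
    diff_w = W[pred].unsqueeze(1) - W.unsqueeze(0)        # (B, C, D)
    diff_b = b[pred].unsqueeze(1) - b.unsqueeze(0)        # (B, C)
    num = (diff_w * z.unsqueeze(1)).sum(-1) + diff_b      # (B, C) signed margins
    dist = num.abs() / diff_w.norm(dim=-1).clamp_min(1e-12)
    # Drop the predicted class itself (its "boundary distance" is degenerate).
    keep = torch.ones_like(dist, dtype=torch.bool).scatter_(1, pred.unsqueeze(1), False)
    return dist[keep].view(len(z), -1).mean(dim=1)        # larger -> more ID-like
```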
arXiv Detail & Related papers (2023-12-15T19:50:32Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test Time Adaptation framework for Out-Of-Distribution Detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate the effectiveness of the framework through comprehensive experiments on multiple OOD detection benchmarks.
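A minimal sketch of this kind of non-parametric, model-free adaptation, assuming a k-nearest-neighbor score over a feature memory bank that absorbs confident test samples; k, the threshold, and the class name are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

class OnlineKNNDetector:
    """Non-parametric OOD detector that adapts to the test stream (sketch).

    Scores a test feature by its mean distance to the k nearest features in a
    memory bank initialized from ID training data. Test features that look
    clearly in-distribution are absorbed into the bank, so the detector adapts
    online to the changing test-time distribution.
    """
    def __init__(self, id_features, k=10, absorb_threshold=0.3):
        self.bank = F.normalize(id_features, dim=1)
        self.k = k
        self.absorb_threshold = absorb_threshold

    def score(self, z):
        z = F.normalize(z, dim=1)
        d = torch.cdist(z, self.bank)                               # (B, N) distances
        knn_dist = d.topk(self.k, largest=False).values.mean(dim=1)
        confident = knn_dist < self.absorb_threshold
        if confident.any():                                         # online adaptation step
            self.bank = torch.cat([self.bank, z[confident]], dim=0)
        return knn_dist                                             # larger -> more OOD-like
```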
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
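For context, the outlier-exposure objective that such approaches build on pairs the usual ID classification loss with a term pushing predictions on auxiliary outliers toward the uniform distribution; a minimal sketch:

```python
import torch.nn.functional as F

def outlier_exposure_loss(logits_id, labels_id, logits_outlier, lam=0.5):
    """Standard outlier-exposure style objective (sketch).

    Cross-entropy on ID samples plus a term equal (up to a constant) to the
    cross-entropy between the uniform distribution and the model's predictions
    on auxiliary outlier samples.
    """
    ce_id = F.cross_entropy(logits_id, labels_id)
    uniformity = -F.log_softmax(logits_outlier, dim=1).mean()  # CE to the uniform distribution
    return ce_id + lam * uniformity
```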
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- A knee cannot have lung disease: out-of-distribution detection with in-distribution voting using the medical example of chest X-ray classification [58.720142291102135]
The study employed the commonly used chest X-ray classification model, CheXnet, trained on the chest X-ray 14 data set.
To detect OOD data for multi-label classification, we propose in-distribution voting (IDV).
The proposed IDV approach trained on ID (chest X-ray 14) and OOD data (IRMA and ImageNet) achieved, on average, 0.999 OOD detection AUC across the three data sets.
arXiv Detail & Related papers (2022-08-01T18:20:36Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
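A hedged sketch of that training recipe: PGD-style perturbations are crafted against both the ID classification term and the outlier-exposure term before the combined loss is minimized; the step sizes, the epsilon ball, and the exact loss pairing are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, loss_fn, eps=8/255, step=2/255, iters=10):
    """L-infinity PGD: perturb x within an eps-ball to maximize loss_fn(model(x_adv))."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(x_adv)), x_adv)
        x_adv = x_adv + step * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0.0, 1.0)  # project back to the ball
    return x_adv.detach()

def robust_training_step(model, x_id, y_id, x_outlier, lam=0.5):
    """One step of training on adversarially crafted inliers and outliers (sketch)."""
    # Adversarial inliers: maximize the classification loss.
    x_id_adv = pgd_perturb(model, x_id, lambda lg: F.cross_entropy(lg, y_id))
    # Adversarial outliers: maximize deviation from the uniform prediction.
    x_out_adv = pgd_perturb(model, x_outlier, lambda lg: -F.log_softmax(lg, dim=1).mean())
    loss = F.cross_entropy(model(x_id_adv), y_id) \
        + lam * (-F.log_softmax(model(x_out_adv), dim=1).mean())
    return loss
```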
arXiv Detail & Related papers (2020-03-21T17:46:28Z)