Likelihood-Aware Semantic Alignment for Full-Spectrum
Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2312.01732v1
- Date: Mon, 4 Dec 2023 08:53:59 GMT
- Title: Likelihood-Aware Semantic Alignment for Full-Spectrum
Out-of-Distribution Detection
- Authors: Fan Lu, Kai Zhu, Kecheng Zheng, Wei Zhai, Yang Cao
- Abstract summary: We propose a Likelihood-Aware Semantic Alignment (LSA) framework to promote the image-text correspondence into semantically high-likelihood regions.
Extensive experiments demonstrate the remarkable OOD detection performance of our proposed LSA, surpassing existing methods by a margin of $15.26\%$ and $18.88\%$ on two F-OOD benchmarks.
- Score: 24.145060992747077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Full-spectrum out-of-distribution (F-OOD) detection aims to accurately
recognize in-distribution (ID) samples while encountering semantic and
covariate shifts simultaneously. However, existing out-of-distribution (OOD)
detectors tend to overfit the covariance information and ignore intrinsic
semantic correlation, making them inadequate for adapting to complex domain
transformations. To address this issue, we propose a Likelihood-Aware Semantic
Alignment (LSA) framework to promote the image-text correspondence into
semantically high-likelihood regions. LSA consists of an offline Gaussian
sampling strategy which efficiently samples semantic-relevant visual embeddings
from the class-conditional Gaussian distribution, and a bidirectional prompt
customization mechanism that adjusts both ID-related and negative context for
discriminative ID/OOD boundary. Extensive experiments demonstrate the
remarkable OOD detection performance of our proposed LSA especially on the
intractable Near-OOD setting, surpassing existing methods by a margin of
$15.26\%$ and $18.88\%$ on two F-OOD benchmarks, respectively.
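The two components described in the abstract can be illustrated with a minimal NumPy sketch. This is an illustrative reconstruction, not the authors' implementation: the function names, the diagonal-covariance assumption, and the softmax-over-cosine-similarities scoring are all assumptions. It fits a class-conditional Gaussian to ID visual embeddings, samples synthetic embeddings offline, and scores an embedding against ID-related (positive) and negative prompt embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_class_gaussians(feats, labels):
    """Estimate a class-conditional Gaussian (mean, diagonal variance)
    for each class from ID visual embeddings."""
    stats = {}
    for c in np.unique(labels):
        x = feats[labels == c]
        stats[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return stats

def sample_embeddings(stats, n_per_class):
    """Offline sampling step: draw synthetic semantic-relevant embeddings
    from each class-conditional Gaussian."""
    xs, ys = [], []
    for c, (mu, var) in stats.items():
        xs.append(rng.normal(mu, np.sqrt(var), size=(n_per_class, mu.size)))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

def id_score(embed, pos_prompts, neg_prompts, tau=0.07):
    """Score an embedding against positive (ID-related) and negative
    prompt embeddings: softmax over cosine similarities; the ID score
    is the probability mass on the positive prompts."""
    def cos(a, B):
        return B @ a / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)
    sims = np.concatenate([cos(embed, pos_prompts), cos(embed, neg_prompts)]) / tau
    p = np.exp(sims - sims.max())
    p /= p.sum()
    return p[: len(pos_prompts)].sum()
```

Because sampling happens offline from fitted Gaussians rather than from raw images, the synthetic embeddings stay in semantically high-likelihood regions of each class, which is the intuition the abstract describes.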
Related papers
- Resultant: Incremental Effectiveness on Likelihood for Unsupervised Out-of-Distribution Detection [63.93728560200819]
Unsupervised out-of-distribution (U-OOD) detection aims to identify data samples with a detector trained solely on unlabeled in-distribution (ID) data.
Recent studies have developed various detectors based on deep generative models (DGMs) to move beyond likelihood.
We apply two techniques for each direction, specifically post-hoc prior and dataset entropy-mutual calibration.
Experimental results demonstrate that the Resultant could be a new state-of-the-art U-OOD detector.
arXiv Detail & Related papers (2024-09-05T02:58:13Z)
- Diffusion based Semantic Outlier Generation via Nuisance Awareness for Out-of-Distribution Detection [9.936136347796413]
Out-of-distribution (OOD) detection has recently shown promising results through training with synthetic OOD datasets.
We propose a novel framework, Semantic Outlier generation via Nuisance Awareness (SONA), which notably produces challenging outliers.
Our approach incorporates SONA guidance, providing separate control over semantic and nuisance regions of ID samples.
arXiv Detail & Related papers (2024-08-27T07:52:44Z)
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples actually have close semantic contents to the in-distribution (ID) sample, which makes determining the OOD sample a Sorites Paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z)
- Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose an Ambiguity-Resistant Semi-supervised Learning (ARSL) method for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
arXiv Detail & Related papers (2023-03-27T07:46:58Z)
- Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection [24.821151013905865]
Methods which utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs.
We provide a possible explanation for SN-OOD detection failures and propose nuisance-aware OOD detection to address them.
arXiv Detail & Related papers (2023-02-08T15:28:33Z)
- Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by $1.29\%$, $1.45\%$, and $0.69\%$ in anomaly detection false positive rate (FPR) and by $3.24\%$, $4.06\%$, and $7.89\%$ in in-distribution classification accuracy.
arXiv Detail & Related papers (2022-07-04T01:53:07Z)
- Supervision Adaptation Balancing In-distribution Generalization and Out-of-distribution Detection [36.66825830101456]
In-distribution (ID) and out-of-distribution (OOD) samples can lead to distributional vulnerability in deep neural networks.
We introduce a novel supervision adaptation approach to generate adaptive supervision information for OOD samples, making them more compatible with ID samples.
arXiv Detail & Related papers (2022-06-19T11:16:44Z)
- Full-Spectrum Out-of-Distribution Detection [42.98617540431124]
We take into account both shift types and introduce full-spectrum OOD (FS-OOD) detection.
We propose SEM, a simple feature-based semantics score function.
SEM significantly outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2022-04-11T17:59:14Z)
- Self-Supervised Anomaly Detection by Self-Distillation and Negative Sampling [1.304892050913381]
We show that self-distillation of the in-distribution training set together with contrasting against negative examples strongly improves OOD detection.
We observe that by leveraging negative samples, which keep the statistics of low-level features while changing the high-level semantics, higher average detection performance is obtained.
arXiv Detail & Related papers (2022-01-17T12:33:14Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.