DCAC: Dynamic Class-Aware Cache Creates Stronger Out-of-Distribution Detectors
- URL: http://arxiv.org/abs/2601.12468v1
- Date: Sun, 18 Jan 2026 16:16:31 GMT
- Title: DCAC: Dynamic Class-Aware Cache Creates Stronger Out-of-Distribution Detectors
- Authors: Yanqi Wu, Qichao Chen, Runhe Lai, Xinhua Lu, Jia-Xin Zhuang, Zhilin Zhao, Wei-Shi Zheng, Ruixuan Wang
- Abstract summary: Out-of-distribution (OOD) detection remains a fundamental challenge for deep neural networks. We propose DCAC (Dynamic Class-Aware Cache), a training-free, test-time calibration module that maintains separate caches for each ID class to collect high-entropy samples.
- Score: 43.8920190045364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection remains a fundamental challenge for deep neural networks, particularly due to overconfident predictions on unseen OOD samples during testing. We reveal a key insight: OOD samples predicted as the same class, or given high probabilities for it, are visually more similar to each other than to the true in-distribution (ID) samples. Motivated by this class-specific observation, we propose DCAC (Dynamic Class-Aware Cache), a training-free, test-time calibration module that maintains separate caches for each ID class to collect high-entropy samples and calibrate the raw predictions of input samples. DCAC leverages cached visual features and predicted probabilities through a lightweight two-layer module to mitigate overconfident predictions on OOD samples. This module can be seamlessly integrated with various existing OOD detection methods across both unimodal and vision-language models while introducing minimal computational overhead. Extensive experiments on multiple OOD benchmarks demonstrate that DCAC significantly enhances existing methods, achieving substantial improvements, e.g., reducing FPR95 by 6.55% when integrated with ASH-S on the ImageNet OOD benchmark.
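The mechanism the abstract describes (per-class caches of high-entropy samples used to calibrate raw predictions) can be sketched as follows. This is an illustrative assumption, not the paper's method: the class and method names, the FIFO eviction policy, and the cosine-similarity penalty are stand-ins for DCAC's learned lightweight two-layer module.

```python
import math

class ClassAwareCache:
    """Illustrative sketch of a per-class cache in the spirit of DCAC.
    One cache per ID class stores feature vectors of high-entropy test
    samples, later used to calibrate raw predictions."""

    def __init__(self, num_classes, cache_size=8, entropy_threshold=1.0):
        self.caches = {c: [] for c in range(num_classes)}
        self.cache_size = cache_size
        self.entropy_threshold = entropy_threshold

    @staticmethod
    def _entropy(probs):
        # Shannon entropy of a discrete probability vector.
        return -sum(p * math.log(p) for p in probs if p > 0)

    @staticmethod
    def _cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv + 1e-12)

    def update(self, feature, probs):
        # Cache the feature under its predicted class only when the
        # prediction is uncertain (high entropy): an OOD candidate.
        if self._entropy(probs) > self.entropy_threshold:
            c = max(range(len(probs)), key=probs.__getitem__)
            cache = self.caches[c]
            cache.append(list(feature))
            if len(cache) > self.cache_size:
                cache.pop(0)  # simple FIFO eviction (an assumption)

    def calibrate(self, feature, probs):
        # Down-weight the top-class probability in proportion to the
        # input's mean cosine similarity to cached (likely OOD)
        # features of that class, then renormalize.
        c = max(range(len(probs)), key=probs.__getitem__)
        cache = self.caches[c]
        if not cache:
            return list(probs)
        sim = sum(self._cosine(feature, g) for g in cache) / len(cache)
        penalty = max(0.0, min(1.0, sim))
        out = list(probs)
        out[c] *= (1.0 - penalty)
        total = sum(out)
        return [p / total for p in out] if total > 0 else out
```

A test sample that resembles the cached high-entropy features of its predicted class gets its top probability suppressed, which is the calibration effect the abstract attributes to DCAC.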
Related papers
- Predictive Sample Assignment for Semantically Coherent Out-of-Distribution Detection [62.1052001316508]
Semantically coherent out-of-distribution detection (SCOOD) is a recently proposed realistic OOD detection setting. We propose a concise SCOOD framework based on predictive sample assignment (PSA). Our approach outperforms the state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2025-12-15T01:18:38Z)
- GOOD: Training-Free Guided Diffusion Sampling for Out-of-Distribution Detection [61.96025941146103]
GOOD is a novel framework that guides sampling trajectories towards OOD regions using off-the-shelf in-distribution (ID) classifiers. GOOD incorporates dual-level guidance: image-level guidance, based on the gradient of the log partition to reduce input likelihood, drives samples toward low-density regions in pixel space. We introduce a unified OOD score that adaptively combines image and feature discrepancies, enhancing detection robustness.
arXiv Detail & Related papers (2025-10-20T03:58:46Z)
- Towards More Trustworthy Deep Code Models by Enabling Out-of-Distribution Detection
We develop two types of SE-specific OOD detection models: unsupervised and weakly-supervised OOD detection for code. Our proposed methods significantly outperform the baselines in detecting OOD samples from four different scenarios simultaneously and also positively impact a main code understanding task.
arXiv Detail & Related papers (2025-02-26T06:59:53Z)
- DPU: Dynamic Prototype Updating for Multimodal Out-of-Distribution Detection [10.834698906236405]
Out-of-distribution (OOD) detection is essential for ensuring the robustness of machine learning models.
Recent advances in multimodal models have demonstrated the potential of leveraging multiple modalities to enhance detection performance.
We propose Dynamic Prototype Updating (DPU), a novel plug-and-play framework for multimodal OOD detection.
arXiv Detail & Related papers (2024-11-12T22:43:16Z)
- A Mixture of Exemplars Approach for Efficient Out-of-Distribution Detection with Foundation Models [0.0]
This paper presents an efficient approach to tackling OOD detection that is designed to maximise the benefit of training a backbone with a high quality, frozen, pretrained foundation model. MoLAR provides strong OOD detection performance when only comparing the similarity of OOD examples to the exemplars, a small set of images chosen to be representative of the dataset.
arXiv Detail & Related papers (2023-11-28T06:12:28Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning [54.61762276179205]
We propose a novel contrastive learning approach, MMBS, for building robust VQA models by Making the Most of Biased Samples.
Specifically, we construct positive samples for contrastive learning by eliminating the information related to spurious correlation from the original training samples.
We validate our contributions by achieving competitive performance on the OOD dataset VQA-CP v2 while preserving robust performance on the ID dataset VQA v2.
arXiv Detail & Related papers (2022-10-10T11:05:21Z)
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
Training and test data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution. When some test samples are drawn from a distribution far away from that of the training samples, the trained neural network tends to make high-confidence predictions for these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
- Towards Consistent Predictive Confidence through Fitted Ensembles [6.371992222487036]
This paper introduces a separable concept learning framework to measure the performance of classifiers in the presence of OOD examples.
We present a new strong baseline for more consistent predictive confidence in deep models, called fitted ensembles.
arXiv Detail & Related papers (2021-06-22T21:32:31Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
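Several of the entries above score OOD-ness from the model's output distribution; WOOD, for instance, uses a Wasserstein distance. As a minimal, hedged illustration (not the paper's actual formulation, which uses a proper ground metric over classes), a 1-D Wasserstein-1 distance from a softmax output to the nearest one-hot distribution yields a score that is zero for fully confident predictions and large for diffuse ones:

```python
from itertools import accumulate

def wasserstein_1d(p, q):
    # Wasserstein-1 distance between two discrete distributions on the
    # same ordered support: sum of absolute CDF differences.
    return sum(abs(a - b) for a, b in zip(accumulate(p), accumulate(q)))

def wood_style_score(probs):
    # Illustrative OOD score loosely inspired by WOOD (an assumption,
    # not the paper's exact objective): distance from the softmax
    # output to the nearest one-hot vector; larger means more OOD-like.
    # Note: class indices have no natural order, so treating them as a
    # 1-D support is a simplification for demonstration only.
    k = len(probs)
    return min(
        wasserstein_1d(probs, [1.0 if j == i else 0.0 for j in range(k)])
        for i in range(k)
    )
```

A confident prediction such as `[1.0, 0.0, 0.0]` scores 0, while a uniform prediction scores strictly higher, matching the intuition that overconfident outputs look ID and diffuse outputs look OOD.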
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.