Catalyst: Out-of-Distribution Detection via Elastic Scaling
- URL: http://arxiv.org/abs/2602.02409v1
- Date: Mon, 02 Feb 2026 18:08:33 GMT
- Title: Catalyst: Out-of-Distribution Detection via Elastic Scaling
- Authors: Abid Hassan, Tuan Ngo, Saad Shafiq, Nenad Medvidovic
- Abstract summary: Out-of-distribution (OOD) detection is critical for the safe deployment of deep neural networks. State-of-the-art post-hoc methods typically derive OOD scores from the output logits or the penultimate feature vector obtained via global average pooling (GAP). We introduce Catalyst, a post-hoc framework that exploits these under-explored signals.
- Score: 7.883652498475041
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection is critical for the safe deployment of deep neural networks. State-of-the-art post-hoc methods typically derive OOD scores from the output logits or the penultimate feature vector obtained via global average pooling (GAP). We contend that this exclusive reliance on the logit or feature vector discards a rich, complementary signal: the raw channel-wise statistics of the pre-pooling feature map that GAP destroys. In this paper, we introduce Catalyst, a post-hoc framework that exploits these under-explored signals. Catalyst computes an input-dependent scaling factor ($γ$) on-the-fly from these raw statistics (e.g., mean, standard deviation, and maximum activation). This $γ$ is then fused with the existing baseline score, multiplicatively modulating it -- an "elastic scaling" -- to push the ID and OOD distributions further apart. We demonstrate Catalyst is a generalizable framework: it seamlessly integrates with logit-based methods (e.g., Energy, ReAct, SCALE) and also provides a significant boost to distance-based detectors like KNN. As a result, Catalyst achieves substantial and consistent performance gains, reducing the average False Positive Rate by 32.87% on CIFAR-10 (ResNet-18), 27.94% on CIFAR-100 (ResNet-18), and 22.25% on ImageNet (ResNet-50). Our results highlight the untapped potential of pre-pooling statistics and demonstrate that Catalyst is complementary to existing OOD detection approaches.
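The abstract describes the mechanism only at a high level: derive a scaling factor γ from the pre-pooling channel statistics and multiply it into an existing baseline score such as Energy. The sketch below illustrates that idea in NumPy; the exact statistic aggregation, the `alpha` hyperparameter, and the fusion rule `gamma = 1 + alpha * mean(stats)` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Standard energy-based OOD score: higher => more in-distribution.
    return T * np.log(np.sum(np.exp(logits / T), axis=-1))

def catalyst_score(feature_map, logits, alpha=1.0):
    """Illustrative sketch of Catalyst-style elastic scaling (details assumed).

    feature_map: pre-pooling activations of shape (C, H, W) -- the tensor
    whose per-channel statistics GAP would otherwise collapse to means.
    """
    flat = feature_map.reshape(feature_map.shape[0], -1)          # (C, H*W)
    stats = np.concatenate([flat.mean(axis=1),                    # channel means
                            flat.std(axis=1),                     # channel stds
                            flat.max(axis=1)])                    # channel maxima
    gamma = 1.0 + alpha * stats.mean()        # input-dependent scaling factor
    return gamma * energy_score(logits)       # multiplicative modulation
```

Because the modulation is multiplicative, any baseline score (ReAct, SCALE, or a negated KNN distance) could be substituted for `energy_score` without changing the surrounding logic.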
Related papers
- ODAR: Principled Adaptive Routing for LLM Reasoning via Active Inference [60.958331943869126]
ODAR-Expert is an adaptive routing framework that optimizes the accuracy-efficiency trade-off via principled resource allocation. We show strong and consistent gains, including 98.2% accuracy on MATH and 54.8% on Humanity's Last Exam.
arXiv Detail & Related papers (2026-02-27T05:22:01Z) - Halt the Hallucination: Decoupling Signal and Semantic OOD Detection Based on Cascaded Early Rejection [7.227431306238601]
We propose the Cascaded Early Rejection (CER) framework, which realizes hierarchical filtering for anomaly detection via a coarse-to-fine logic. Experimental results demonstrate that CER not only reduces computational overhead by 32% but also achieves a significant performance leap on the CIFAR-100 benchmark.
arXiv Detail & Related papers (2026-02-06T02:55:35Z) - DAVIS: OOD Detection via Dominant Activations and Variance for Increased Separation [7.883652498475041]
We introduce DAVIS, a simple and broadly applicable post-hoc technique that enriches features by incorporating crucial statistics. It achieves significant reductions in the false positive rate (FPR95), with improvements of 48.26% on CIFAR-10 using ResNet-18, 38.13% on CIFAR-100 using ResNet-34, and 26.83% on ImageNet-1k using MobileNet-v2.
arXiv Detail & Related papers (2026-01-30T08:23:14Z) - AdaSCALE: Adaptive Scaling for OOD Detection [1.6921396880325779]
Out-of-distribution (OOD) detection methods leverage activation shaping to improve the separation between in-distribution (ID) and OOD inputs. We propose AdaSCALE, an adaptive scaling procedure that dynamically adjusts the percentile threshold based on a sample's estimated OOD likelihood. Our approach achieves state-of-the-art OOD detection performance, outperforming the latest rival OptFS by 14.94% in near-OOD and 21.67% in far-OOD average FPR@95 on the ImageNet-1k benchmark.
arXiv Detail & Related papers (2025-03-11T04:10:06Z) - End-to-End Convolutional Activation Anomaly Analysis for Anomaly Detection [41.94295877935867]
We propose an End-to-end Convolutional Activation Anomaly Analysis (E2E-CA$3$) approach.
It is a significant extension of the A$3$ anomaly detection approach proposed by Sperl, Schulze and Böttinger.
arXiv Detail & Related papers (2024-11-21T10:22:50Z) - HAct: Out-of-Distribution Detection with Neural Net Activation Histograms [7.795929277007233]
We propose a novel descriptor, HAct, for OOD detection, that is, probability distributions (approximated by histograms) of output values of neural network layers under the influence of incoming data.
We demonstrate that HAct is significantly more accurate than state-of-the-art in OOD detection on multiple image classification benchmarks.
arXiv Detail & Related papers (2023-09-09T16:22:18Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z) - SADet: Learning An Efficient and Accurate Pedestrian Detector [68.66857832440897]
This paper proposes a series of systematic optimization strategies for the detection pipeline of one-stage detector.
It forms a single shot anchor-based detector (SADet) for efficient and accurate pedestrian detection.
Though structurally simple, it achieves state-of-the-art results and a real-time speed of 20 FPS on VGA-resolution images.
arXiv Detail & Related papers (2020-07-26T12:32:38Z) - Network Moments: Extensions and Sparse-Smooth Attacks [59.24080620535988]
We derive exact analytic expressions for the first and second moments of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input.
We show that the new variance expression can be efficiently approximated leading to much tighter variance estimates.
arXiv Detail & Related papers (2020-06-21T11:36:41Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its contents (including all information) and is not responsible for any consequences of their use.