TASTE: Task-Aware Out-of-Distribution Detection via Stein Operators
- URL: http://arxiv.org/abs/2602.07640v1
- Date: Sat, 07 Feb 2026 17:48:08 GMT
- Title: TASTE: Task-Aware Out-of-Distribution Detection via Stein Operators
- Authors: Michał Kozyra, Gesine Reinert
- Abstract summary: TASTE (Task-Aware STEin operators) is a task-aware framework based on so-called Stein operators. We show that the resulting operator admits a clear geometric interpretation as a projection of distribution shift onto the sensitivity field of the model.
- Score: 2.191505742658975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution detection methods are often either data-centric, detecting deviations from the training input distribution irrespective of their effect on a trained model, or model-centric, relying on classifier outputs without explicit reference to data geometry. We propose TASTE (Task-Aware STEin operators): a task-aware framework based on so-called Stein operators, which allows us to link distribution shift to the input sensitivity of the model. We show that the resulting operator admits a clear geometric interpretation as a projection of distribution shift onto the sensitivity field of the model, yielding theoretical guarantees. Beyond detecting the presence of a shift, the same construction enables its localisation through a coordinate-wise decomposition and, for image data, provides interpretable per-pixel diagnostics. Experiments on controlled Gaussian shifts, MNIST under geometric perturbations, and CIFAR-10 perturbed benchmarks demonstrate that the proposed method aligns closely with task degradation while outperforming established baselines.
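The abstract does not spell out the operator, but the description matches a Langevin-type Stein construction whose test function is the model's input-gradient ("sensitivity") field. The sketch below is a minimal PyTorch illustration under that assumption only: `stein_shift_statistic`, `score_fn`, and `model` are hypothetical names, `score_fn` is assumed to return the training-density score grad_x log p_train(x) from some fitted density model, and the paper's exact construction may differ.

```python
import torch

def stein_shift_statistic(score_fn, model, x, cls, n_probe=4):
    # Langevin Stein operator A_p f(x) = <s(x), f(x)> + div f(x), with
    # s(x) = grad_x log p_train(x) (from score_fn) and f(x) the model's
    # sensitivity field grad_x model(x)[cls]. Under mild regularity,
    # E_{x ~ p_train}[A_p f(x)] = 0, so a large batch mean signals a shift
    # that overlaps directions the model is sensitive to.
    x = x.detach().requires_grad_(True)
    logit = model(x)[:, cls].sum()
    f = torch.autograd.grad(logit, x, create_graph=True)[0]   # sensitivity field
    inner = (score_fn(x).detach() * f).flatten(1).sum(dim=1)  # <s(x), f(x)>
    # Hutchinson estimator for div f (avoids materialising the Hessian).
    div = torch.zeros(x.shape[0], device=x.device)
    for _ in range(n_probe):
        v = torch.randn_like(x)
        jvp = torch.autograd.grad((f * v).sum(), x, retain_graph=True)[0]
        div = div + (v * jvp).flatten(1).sum(dim=1) / n_probe
    return inner + div  # per-sample; average over a batch for the test
```

Averaged over a batch, such a statistic is close to zero on in-distribution data (the Stein identity) and grows when the shift has a component along directions the model is sensitive to, which matches the projection reading given in the abstract.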
Related papers
- Deceptive Risk Minimization: Out-of-Distribution Generalization by Deceiving Distribution Shift Detectors [13.676316138121395]
This paper proposes deception as a mechanism for out-of-distribution generalization. By learning data representations that make training data appear independent and identically distributed to an observer, we can identify stable features that eliminate spurious correlations.
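As a rough illustration only (a hedged reading of the idea, not the paper's objective), "making training data appear i.i.d. to an observer" can be sketched as penalising any statistic that lets a detector distinguish representations drawn from two training environments; `encoder` and the linear-kernel MMD choice below are assumptions.

```python
import torch

def mmd2_linear(a, b):
    # Linear-kernel MMD^2: squared distance between the two batch means.
    return ((a.mean(dim=0) - b.mean(dim=0)) ** 2).sum()

def deception_loss(encoder, x_env_a, x_env_b):
    # Penalise any mean difference between the encoded environments, so the
    # representations look identically distributed to a mean-based shift
    # detector; a full method would target the actual detector being deceived.
    return mmd2_linear(encoder(x_env_a), encoder(x_env_b))
```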
arXiv Detail & Related papers (2025-09-15T16:11:55Z)
- A Self-Organizing Clustering System for Unsupervised Distribution Shift Detection [1.0436203990235575]
We propose a continual learning framework for monitoring and detecting distribution changes.
In particular, we investigate the projections made by two topology-preserving maps: the Self-Organizing Map and the Scale Invariant Map.
Our method can be applied in both a supervised and an unsupervised context.
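In the same spirit, a generic SOM-based drift monitor can be sketched with the third-party minisom package (a sketch only: the grid, window size, and 3-sigma threshold are assumptions, and this is not the paper's Self-Organizing Clustering System or its Scale Invariant Map).

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

def fit_som_monitor(reference, grid=(10, 10), iters=5000, window=200, seed=0):
    # Fit a SOM on in-distribution data, then calibrate a drift threshold
    # from the quantization error of reference windows (3-sigma rule).
    som = MiniSom(grid[0], grid[1], reference.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.train_random(reference, iters)
    errs = [som.quantization_error(reference[i:i + window])
            for i in range(0, len(reference) - window + 1, window)]
    return som, float(np.mean(errs) + 3 * np.std(errs))

def window_shifted(som, threshold, batch):
    # Flag an incoming window whose mean quantization error exceeds the
    # calibrated threshold; rising error signals a change in the input
    # distribution relative to the fitted topology-preserving map.
    return som.quantization_error(batch) > threshold
```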
arXiv Detail & Related papers (2024-04-25T14:48:29Z)
- Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective [6.845698872290768]
Anomaly detection (AD) is the machine learning task of identifying highly discrepant abnormal samples.
Under the constraints of a distribution shift, the assumption that training samples and test samples are drawn from the same distribution breaks down.
We attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
arXiv Detail & Related papers (2023-12-21T23:20:47Z)
- Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation [49.827306773992376]
Continual Test-Time Adaptation (CTTA) is proposed to migrate a source pre-trained model to continually changing target distributions.
Our proposed method attains state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-12-19T15:34:52Z)
- Distributional Gaussian Processes Layers for Out-of-Distribution Detection [18.05109901753853]
It is unclear whether out-of-distribution detection models that rely on deep neural networks are suitable for detecting domain shifts in medical imaging.
We propose a parameter-efficient Bayesian layer for hierarchical convolutional Gaussian Processes that incorporates Gaussian Processes operating in Wasserstein-2 space.
Our uncertainty estimates result in out-of-distribution detection that outperforms the capabilities of previous Bayesian networks.
arXiv Detail & Related papers (2022-06-27T14:49:48Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported on a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset, a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
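The randomisation step can be sketched as a majority vote over randomly transformed copies of the input (a minimal sketch: `transform_sampler` is a hypothetical callable returning a random transform, e.g. a small rotation, and the certification itself, bounding accuracy under Wasserstein shifts, is not reproduced here).

```python
import torch

def smoothed_predict(model, x, transform_sampler, n_samples=100):
    # Majority vote of the base classifier over random transformations of
    # the input; the vote, not a single forward pass, is the smoothed model
    # whose behaviour under transformation shifts can be certified.
    with torch.no_grad():
        num_classes = model(x).shape[-1]
        votes = torch.zeros(num_classes)
        for _ in range(n_samples):
            t = transform_sampler()            # e.g. a random small rotation
            pred = model(t(x)).argmax(dim=-1)  # base prediction on t(x)
            votes[pred] += 1
    return int(votes.argmax())
```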
arXiv Detail & Related papers (2022-01-28T22:03:50Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides a well-defined relationship between unlabeled inputs under distribution shift and the model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
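The activation-monitoring idea can be sketched as follows (a sketch, not DAAIN itself: the toy network, the choice of monitored layer, and the Gaussian mixture, swapped in here as a lightweight stand-in for DAAIN's normalizing flow, are all assumptions).

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# Toy stand-in classifier; DAAIN monitors a real network's hidden activations.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 10))
monitored = model[2]  # which layer to monitor is an assumption

def activations(x):
    # Capture the monitored layer's output for a batch via a forward hook.
    captured = []
    handle = monitored.register_forward_hook(
        lambda mod, inp, out: captured.append(out.detach()))
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured[0].numpy()

train_acts = activations(torch.randn(512, 1, 28, 28))  # stand-in train data

# DAAIN fits a normalizing flow over activations; a Gaussian mixture plays
# the same role here (a density model of the activation distribution).
density = GaussianMixture(n_components=5, random_state=0).fit(train_acts)
tau = np.percentile(density.score_samples(train_acts), 1)  # low-likelihood cut

def flag_input(x):
    # Inputs whose activations are unlikely under the fitted density are
    # flagged as OOD or adversarial.
    return density.score_samples(activations(x)) < tau
```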
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We observe that a variational autoencoder need not consistently encode samples generated by its own decoder; we study the implications of this behaviour for the learned representations and the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
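Self-consistency can be sketched as requiring that re-encoding a decoded sample recovers the original code (a minimal deterministic reading using the posterior mean; the paper's formulation is over the full encoder distribution and may differ in detail, and `encode_mean` / `decode` are hypothetical callables).

```python
import torch

def self_consistency_penalty(encode_mean, decode, x):
    # A self-consistent VAE should map decode(encode(x)) back to (near)
    # encode(x); penalise drift of the re-encoded reconstruction in latent
    # space.
    z = encode_mean(x)          # posterior mean as a deterministic encoding
    x_hat = decode(z)           # reconstruction
    z_hat = encode_mean(x_hat)  # re-encoding of the reconstruction
    return torch.mean((z - z_hat) ** 2)
```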
arXiv Detail & Related papers (2020-12-07T14:16:14Z)