SubspaceAD: Training-Free Few-Shot Anomaly Detection via Subspace Modeling
- URL: http://arxiv.org/abs/2602.23013v1
- Date: Thu, 26 Feb 2026 13:52:57 GMT
- Title: SubspaceAD: Training-Free Few-Shot Anomaly Detection via Subspace Modeling
- Authors: Camile Lendering, Erkut Akdag, Egor Bondarev
- Abstract summary: SubspaceAD is a training-free method for detecting visual anomalies in industrial images. It works across one-shot and few-shot settings without training, prompt tuning, or memory banks. It achieves image-level and pixel-level AUROC of 98.0% and 97.6% on the MVTec-AD dataset, and 93.3% and 98.3% on the VisA dataset.
- Score: 6.476948781728136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting visual anomalies in industrial inspection often requires training with only a few normal images per category. Recent few-shot methods achieve strong results by employing foundation-model features, but typically rely on memory banks, auxiliary datasets, or multi-modal tuning of vision-language models. We therefore question whether such complexity is necessary given the feature representations of vision foundation models. To answer this question, we introduce SubspaceAD, a training-free method that operates in two simple stages. First, patch-level features are extracted from a small set of normal images by a frozen DINOv2 backbone. Second, a Principal Component Analysis (PCA) model is fit to these features to estimate the low-dimensional subspace of normal variations. At inference, anomalies are detected via the reconstruction residual with respect to this subspace, producing interpretable and statistically grounded anomaly scores. Despite its simplicity, SubspaceAD achieves state-of-the-art performance across one-shot and few-shot settings without training, prompt tuning, or memory banks. In the one-shot anomaly detection setting, SubspaceAD achieves image-level and pixel-level AUROC of 98.0% and 97.6% on the MVTec-AD dataset, and 93.3% and 98.3% on the VisA dataset, respectively, surpassing prior state-of-the-art results. Code and demo are available at https://github.com/CLendering/SubspaceAD.
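The two-stage pipeline described in the abstract (fit a PCA subspace to normal patch features, then score by reconstruction residual) can be sketched in plain NumPy. This is an illustrative stand-in, not the authors' released code: the function names are hypothetical, and random vectors substitute for the frozen DINOv2 patch features.

```python
import numpy as np

def fit_subspace(feats, n_components):
    """Fit a PCA subspace to (N, D) patch features from normal images."""
    mean = feats.mean(axis=0)
    # PCA via SVD of the centered feature matrix; rows of vt are principal directions
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    return mean, vt[:n_components]          # (D,), (k, D)

def anomaly_scores(feats, mean, components):
    """Score patches by their residual w.r.t. the normal subspace."""
    centered = feats - mean
    recon = centered @ components.T @ components   # projection onto the subspace
    residual = centered - recon
    return np.linalg.norm(residual, axis=1)        # per-patch reconstruction residual

# Toy stand-in for backbone features: normal data lies near a 5-dim subspace of R^64
rng = np.random.default_rng(0)
basis = rng.standard_normal((5, 64))
normal = rng.standard_normal((500, 5)) @ basis + 0.01 * rng.standard_normal((500, 64))
mean, comps = fit_subspace(normal, n_components=5)

test_normal = rng.standard_normal((10, 5)) @ basis       # on-subspace points
test_anomalous = 3.0 * rng.standard_normal((10, 64))     # off-subspace points
print(anomaly_scores(test_normal, mean, comps).mean() <
      anomaly_scores(test_anomalous, mean, comps).mean())  # → True
```

In practice the residuals would be reshaped back to the image's patch grid to obtain a pixel-level anomaly map; here the point is only that off-subspace features receive markedly higher scores.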
Related papers
- Training Free Zero-Shot Visual Anomaly Localization via Diffusion Inversion [15.486565360380203]
Zero-Shot Anomaly Detection (ZSAD) aims to detect and localise anomalies without access to any normal training samples of the target data. Recent approaches leverage additional modalities such as language to generate fine-grained prompts for localisation. We introduce a training-free vision-only ZSAD framework that circumvents the need for fine-grained prompts.
arXiv Detail & Related papers (2026-01-12T21:55:31Z)
- Foundation Visual Encoders Are Secretly Few-Shot Anomaly Detectors [58.75916798814376]
We develop a few-shot anomaly detector termed FoundAD. We observe that the amount of anomaly in an image directly correlates with the difference in the learnt embeddings. This simple operator acts as an effective tool for anomaly detection, characterizing and identifying out-of-distribution regions in an image.
arXiv Detail & Related papers (2025-10-02T11:53:20Z)
- Few-Shot Pattern Detection via Template Matching and Regression [52.79291493477272]
We propose a simple yet effective detector based on template matching and regression, dubbed TMR. It effectively preserves and leverages the spatial layout of exemplars through a minimalistic structure with a small number of learnable convolutional or projection layers on top of a frozen backbone. Our method outperforms the state-of-the-art methods on three benchmarks, RPINE, FSCD-147, and FSCD-LVIS, and demonstrates strong generalization in cross-dataset evaluation.
arXiv Detail & Related papers (2025-08-25T03:52:42Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- CRADL: Contrastive Representations for Unsupervised Anomaly Detection and Localization [2.8659934481869715]
Unsupervised anomaly detection in medical imaging aims to detect and localize arbitrary anomalies without requiring anomalous data during training.
Most current state-of-the-art methods use latent variable generative models operating directly on the images.
We propose CRADL, whose core idea is to model the distribution of normal samples directly in the low-dimensional representation space of an encoder trained with a contrastive pretext task.
arXiv Detail & Related papers (2023-01-05T16:07:49Z)
- Y-GAN: Learning Dual Data Representations for Efficient Anomaly Detection [0.0]
We propose a novel reconstruction-based model for anomaly detection, called Y-GAN.
The model consists of a Y-shaped auto-encoder and represents images in two separate latent spaces.
arXiv Detail & Related papers (2021-09-28T20:17:04Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.