Subspace-Guided Feature Reconstruction for Unsupervised Anomaly
Localization
- URL: http://arxiv.org/abs/2309.13904v2
- Date: Wed, 28 Feb 2024 09:16:53 GMT
- Title: Subspace-Guided Feature Reconstruction for Unsupervised Anomaly
Localization
- Authors: Katsuya Hotta, Chao Zhang, Yoshihiro Hagihara, Takuya Akashi
- Abstract summary: Unsupervised anomaly localization plays a critical role in industrial manufacturing.
Most recent methods perform feature matching or reconstruction for the target sample with pre-trained deep neural networks.
We propose a novel subspace-guided feature reconstruction framework to pursue adaptive feature approximation for anomaly localization.
- Score: 5.085309164633571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised anomaly localization, which plays a critical role in industrial
manufacturing, aims to identify anomalous regions that deviate from normal
sample patterns. Most recent methods perform feature matching or reconstruction
for the target sample with pre-trained deep neural networks. However, they
still struggle to address challenging anomalies because the deep embeddings
stored in the memory bank can be insufficiently powerful and informative. More
specifically, prior methods often overly rely on the finite resources stored in
the memory bank, which leads to low robustness to unseen targets. In this
paper, we propose a novel subspace-guided feature reconstruction framework to
pursue adaptive feature approximation for anomaly localization. It first learns
to construct low-dimensional subspaces from the given nominal samples, and then
learns to reconstruct the given deep target embedding by linearly combining the
subspace basis vectors using the self-expressive model. Our core idea is that,
despite the limited resources in the memory bank, out-of-bank features can
still be "mimicked" under the self-expressive mechanism to adaptively model
the target. Ultimately, the poorly reconstructed feature dimensions
indicate anomalies for localization. Moreover, we propose a sampling method
that leverages the sparsity of subspaces and allows the feature reconstruction
to depend only on a small resource subset, which contributes to less memory
overhead. Extensive experiments on three industrial benchmark datasets
demonstrate that our approach generally achieves state-of-the-art anomaly
localization performance.
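For readers who want a concrete picture, the snippet below is a minimal, hedged sketch of the general idea described in the abstract: a target deep embedding is reconstructed as a sparse linear combination of nominal embeddings from a memory bank, and poorly reconstructed dimensions are treated as anomalous. The lasso solver, the synthetic data, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): self-expressive reconstruction of a
# target embedding from a memory bank of nominal embeddings, with per-dimension
# reconstruction error used as the anomaly signal. All settings are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Memory bank: columns are deep embeddings of nominal (defect-free) patches.
d, n_bank = 64, 200
bank = rng.normal(size=(d, n_bank))

# A target embedding; a few dimensions are perturbed to mimic an anomaly.
target = bank[:, :5] @ rng.normal(size=5)
target[:8] += 3.0

# Self-expressive step: find sparse coefficients c such that target ~= bank @ c.
# Sparsity means only a small subset of the bank is actually used, loosely
# echoing the paper's point about reconstructing from a small resource subset.
solver = Lasso(alpha=0.05, fit_intercept=False, max_iter=5000)
solver.fit(bank, target)
coeffs = solver.coef_                      # shape (n_bank,), mostly zeros

# Poorly reconstructed feature dimensions indicate the anomaly.
reconstruction = bank @ coeffs
per_dim_error = np.abs(target - reconstruction)

print("nonzero coefficients:", int(np.count_nonzero(coeffs)), "/", n_bank)
print("top anomalous dimensions:", np.argsort(per_dim_error)[-5:])
```

In the paper's framework, the basis vectors come from low-dimensional subspaces learned on the nominal samples and the residuals are turned into a pixel-level localization map; the sketch only conveys the reconstruct-then-score mechanism.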
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks such as classification, clustering, and retrieval.
To maintain the structure of the embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss, built on a coding rate metric (a generic coding-rate sketch appears after this list).
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Neural Surface Reconstruction from Sparse Views Using Epipolar Geometry [4.659427498118277]
We present a novel approach, named EpiS, that incorporates Epipolar information into the reconstruction process.
Our method aggregates coarse information from the cost volume into Epipolar features extracted from multiple source views.
To address the information gaps in sparse conditions, we integrate depth information from monocular depth estimation using global and local regularization techniques.
arXiv Detail & Related papers (2024-06-06T17:47:48Z)
- Reconstruction-Based Anomaly Localization via Knowledge-Informed Self-Training [10.565214056914174]
Knowledge-informed self-training (KIST) integrates knowledge into the reconstruction model through self-training.
KIST utilizes weakly labeled anomalous samples in addition to the normal ones and exploits knowledge to yield pixel-level pseudo-labels of the anomalous samples.
arXiv Detail & Related papers (2024-02-22T03:15:13Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- ARES: Locally Adaptive Reconstruction-based Anomaly Scoring [25.707159917988733]
We show that the anomaly scoring function is not adaptive to the natural variation in reconstruction error across the range of normal samples.
We propose a novel Adaptive Reconstruction Error-based Scoring approach, which adapts its scoring based on the local behaviour of reconstruction error over the latent space (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-06-15T15:35:12Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Regressive Domain Adaptation for Unsupervised Keypoint Detection [67.2950306888855]
Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings large improvements of 8% to 11% in terms of PCK on different datasets.
arXiv Detail & Related papers (2021-03-10T16:45:22Z)
- Robust Locality-Aware Regression for Labeled Data Classification [5.432221650286726]
We propose a new discriminant feature extraction framework, namely Robust Locality-Aware Regression (RLAR).
In our model, we introduce a retargeted regression to perform the marginal representation learning adaptively instead of using the general average inter-class margin.
To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term and the locality-aware term, together with the regularization term, by the L2,1 norm (illustrated in the sketch after this list).
arXiv Detail & Related papers (2020-06-15T11:36:59Z)
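A generic coding-rate sketch for the Anti-Collapse Loss entry above: this is the spread-measuring quantity that coding-rate-based losses typically build on; the paper's exact loss function may differ, and the epsilon value and toy data below are assumptions.

```python
# Generic sketch (not the paper's loss): coding rate of an embedding matrix Z
# with columns as embeddings; collapsed embeddings yield a much lower value.
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """R(Z) = 0.5 * logdet(I + d / (n * eps^2) * Z @ Z.T), Z of shape (d, n)."""
    d, n = Z.shape
    gram = np.eye(d) + (d / (n * eps ** 2)) * (Z @ Z.T)
    _, logdet = np.linalg.slogdet(gram)
    return 0.5 * float(logdet)

rng = np.random.default_rng(0)
spread = rng.normal(size=(16, 128))                  # well-spread embeddings
collapsed = np.tile(rng.normal(size=(16, 1)), 128)   # all embeddings identical

# An anti-collapse regularizer would reward the higher coding rate.
print("spread:   ", coding_rate(spread))
print("collapsed:", coding_rate(collapsed))
```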
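The toy sketch referenced in the ARES entry above: a test sample's reconstruction error is normalized against the error statistics of its nearest normal neighbours in latent space, so the score adapts to local variation. The k-NN scheme, constants, and synthetic data are assumptions rather than the paper's actual method.

```python
# Toy sketch (assumptions throughout): locally adaptive scoring of a
# reconstruction error using error statistics of latent-space neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Latent codes and reconstruction errors of normal training samples; the
# typical error level is made to vary across the latent space on purpose.
train_latents = rng.normal(size=(500, 8))
train_errors = 0.1 + 0.05 * np.abs(train_latents[:, 0])

knn = NearestNeighbors(n_neighbors=20).fit(train_latents)

def local_adaptive_score(test_latent: np.ndarray, test_error: float) -> float:
    """Z-score of the test error relative to its latent neighbourhood."""
    _, idx = knn.kneighbors(test_latent[None, :])
    neighbour_errors = train_errors[idx[0]]
    mu, sigma = neighbour_errors.mean(), neighbour_errors.std() + 1e-8
    return float((test_error - mu) / sigma)

print(local_adaptive_score(rng.normal(size=8), test_error=0.5))
```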
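And for the RLAR entry above, the L2,1 norm it uses is simply the sum of the Euclidean norms of a matrix's rows, which promotes row-sparsity and dampens the influence of outlying samples; a minimal, generic illustration:

```python
# Quick illustration of the L2,1 norm: the sum of the Euclidean norms of the
# rows of a matrix (a generic sketch, not RLAR itself).
import numpy as np

def l21_norm(M: np.ndarray) -> float:
    """||M||_{2,1} = sum_i ||M_i||_2 over the rows of M."""
    return float(np.linalg.norm(M, axis=1).sum())

M = np.array([[3.0, 4.0],     # row norm 5
              [0.0, 0.0],     # row norm 0
              [1.0, 0.0]])    # row norm 1
print(l21_norm(M))            # -> 6.0
```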
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.