ToCoAD: Two-Stage Contrastive Learning for Industrial Anomaly Detection
- URL: http://arxiv.org/abs/2407.01312v1
- Date: Mon, 1 Jul 2024 14:19:36 GMT
- Title: ToCoAD: Two-Stage Contrastive Learning for Industrial Anomaly Detection
- Authors: Yun Liang, Zhiguang Hu, Junjie Huang, Donglin Di, Anyang Su, Lei Fan
- Abstract summary: This paper presents a two-stage training strategy, called ToCoAD.
In the first stage, a discriminative network is trained by using synthetic anomalies in a self-supervised learning manner.
This network is then utilized in the second stage to provide a negative feature guide, aiding in the training of the feature extractor through bootstrap contrastive learning.
- Score: 10.241033980055695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current unsupervised anomaly detection approaches perform well on public datasets but struggle with specific anomaly types due to the domain gap between pre-trained feature extractors and target-specific domains. To tackle this issue, this paper presents a two-stage training strategy, called ToCoAD. In the first stage, a discriminative network is trained by using synthetic anomalies in a self-supervised learning manner. This network is then utilized in the second stage to provide a negative feature guide, aiding in the training of the feature extractor through bootstrap contrastive learning. This approach enables the model to progressively learn the distribution of anomalies specific to industrial datasets, effectively enhancing its generalizability to various types of anomalies. Extensive experiments are conducted to demonstrate the effectiveness of our proposed two-stage training strategy, and our model produces competitive performance, achieving pixel-level AUROC scores of 98.21%, 98.43% and 97.70% on MVTec AD, VisA and BTAD respectively.
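The abstract gives enough to sketch the training loop in outline. Below is a minimal, hypothetical PyTorch sketch of such a two-stage scheme; the backbone, the synthetic-anomaly generator, and the loss details are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_backbone():
    return nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

disc_backbone, disc_head = make_backbone(), nn.Linear(64, 1)
feature_extractor = make_backbone()
loader = [torch.rand(4, 3, 32, 32) for _ in range(4)]  # stand-in for normal images

def synthesize_anomaly(x):
    x = x.clone()
    x[:, :, 8:16, 8:16] = torch.rand_like(x[:, :, 8:16, 8:16])  # toy noise patch
    return x

# Stage 1: train a discriminative network on synthetic anomalies (self-supervised).
opt = torch.optim.Adam(list(disc_backbone.parameters()) + list(disc_head.parameters()), lr=1e-4)
for x in loader:
    pair = torch.cat([x, synthesize_anomaly(x)])
    target = torch.cat([torch.zeros(len(x)), torch.ones(len(x))])
    loss = F.binary_cross_entropy_with_logits(disc_head(disc_backbone(pair)).squeeze(1), target)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: frozen stage-1 features of synthetic anomalies act as negatives in a
# contrastive loss that adapts the feature extractor to the target domain.
for p in disc_backbone.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(feature_extractor.parameters(), lr=1e-4)
for x in loader:
    anchor = F.normalize(feature_extractor(x), dim=1)
    positive = F.normalize(feature_extractor(x + 0.02 * torch.randn_like(x)), dim=1)
    negative = F.normalize(disc_backbone(synthesize_anomaly(x)), dim=1)
    logits = torch.stack([(anchor * positive).sum(1), (anchor * negative).sum(1)], dim=1)
    loss = -F.log_softmax(logits / 0.1, dim=1)[:, 0].mean()  # InfoNCE with one negative
    opt.zero_grad(); loss.backward(); opt.step()
```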
Related papers
- Test Time Training for Industrial Anomaly Segmentation [15.973768095014906]
Anomaly Detection and Segmentation (AD&S) is crucial for industrial quality control.
This paper proposes a test time training strategy to improve the segmentation performance.
We demonstrate the effectiveness of our approach over baselines through extensive experimentation and evaluation on MVTec AD and MVTec 3D-AD.
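The summary does not say which self-supervised task drives the test-time updates, so the sketch below uses rotation prediction as a stand-in; the segmenter, the auxiliary head, and the per-image adaptation loop are all assumed.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Conv2d(3, 16, 3, padding=1)   # toy feature extractor
seg_head = nn.Conv2d(16, 1, 1)             # per-pixel anomaly logits
rot_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4))

def test_time_segment(x, steps=5):
    enc, rot = copy.deepcopy(encoder), copy.deepcopy(rot_head)
    opt = torch.optim.SGD(list(enc.parameters()) + list(rot.parameters()), lr=1e-3)
    for _ in range(steps):  # adapt on this single test image
        k = torch.randint(0, 4, (1,)).item()
        rotated = torch.rot90(x, k, dims=(2, 3))
        loss = F.cross_entropy(rot(enc(rotated)), torch.tensor([k]))
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return torch.sigmoid(seg_head(enc(x)))  # anomaly map after adaptation

mask = test_time_segment(torch.rand(1, 3, 64, 64))
```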
arXiv Detail & Related papers (2024-04-04T18:31:24Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
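As a rough illustration of this kind of feature adaptation, the hypothetical sketch below finetunes lightweight adaptors on frozen RGB and 3D features so the two modalities agree per patch (local) and per image (global); shapes and losses are assumptions, not the LSFA method itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

rgb_adapt, pc_adapt = nn.Linear(256, 128), nn.Linear(256, 128)  # lightweight adaptors
opt = torch.optim.Adam(list(rgb_adapt.parameters()) + list(pc_adapt.parameters()), lr=1e-4)

rgb_feats = torch.rand(8, 196, 256)  # frozen RGB backbone: batch x patches x dim
pc_feats = torch.rand(8, 196, 256)   # frozen 3D features for the same patches

a = F.normalize(rgb_adapt(rgb_feats), dim=-1)
b = F.normalize(pc_adapt(pc_feats), dim=-1)
local_loss = (1 - (a * b).sum(-1)).mean()  # patch-level (local) agreement
ga, gb = F.normalize(a.mean(1), dim=-1), F.normalize(b.mean(1), dim=-1)
global_loss = (1 - (ga * gb).sum(-1)).mean()  # image-level (global) agreement
loss = local_loss + global_loss
opt.zero_grad(); loss.backward(); opt.step()
```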
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- CL-Flow: Strengthening the Normalizing Flows by Contrastive Learning for Better Anomaly Detection [1.951082473090397]
We propose a self-supervised anomaly detection approach that combines contrastive learning with 2D-Flow.
Compared to mainstream unsupervised approaches, our self-supervised method demonstrates superior detection accuracy, fewer additional model parameters, and faster inference speed.
Our approach showcases new state-of-the-art results, achieving a performance of 99.6% in image-level AUROC on the MVTecAD dataset and 96.8% in image-level AUROC on the BTAD dataset.
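The mechanism shared by flow-based detectors of this kind can be sketched generically: fit a normalizing flow to features of normal samples and score by negative log-likelihood. The single affine coupling layer below is a toy example, not the paper's 2D-Flow architecture, and the contrastive pretraining stage is omitted.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))  # predicts scale and shift

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                    # keep scales well-behaved
        z = torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)
        return z, s.sum(-1)                  # latent and log|det J|

flow = AffineCoupling(128)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
feats = torch.rand(64, 128)                  # features of normal patches
for _ in range(100):                         # maximize likelihood of normal data
    z, logdet = flow(feats)
    nll = (0.5 * z.pow(2).sum(-1) - logdet).mean()
    opt.zero_grad(); nll.backward(); opt.step()

z, logdet = flow(torch.rand(8, 128))
score = 0.5 * z.pow(2).sum(-1) - logdet      # higher NLL => more anomalous
```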
arXiv Detail & Related papers (2023-11-12T10:07:03Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
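A hedged sketch of the auxiliary-task idea: one encoder feeds a segmentation head supervised on source labels and an implicit-surface head trained self-supervised on both domains, forcing the two domains to share one latent space. The per-point networks and the zero-distance supervision below are simplifications, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(3, 64)        # stand-in for a per-point 3D backbone
seg_head = nn.Linear(64, 10)      # semantic classes, supervised on source only
surf_head = nn.Linear(64 + 3, 1)  # implicit surface: distance at a 3D point

def surface_loss(pts):
    # Observed lidar points lie on the surface, so their target distance is 0
    # (a common implicit-surface self-supervision trick; simplified here).
    h = encoder(pts)
    return surf_head(torch.cat([h, pts], dim=-1)).abs().mean()

src_pts, src_lbl = torch.rand(256, 3), torch.randint(0, 10, (256,))
tgt_pts = torch.rand(256, 3)      # unlabeled target-domain points

seg_loss = F.cross_entropy(seg_head(encoder(src_pts)), src_lbl)
loss = seg_loss + surface_loss(src_pts) + surface_loss(tgt_pts)
loss.backward()
```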
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
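One plausible reading of a two-wing normalisation scheme, sketched below with assumed details: shared weights flow through two BatchNorm branches, one keeping the pre-trained statistics frozen and one adapting to the new task, with both wings' losses summed during fine-tuning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 16, 3, padding=1)            # weights shared by both wings
bn_frozen, bn_adaptive = nn.BatchNorm2d(16), nn.BatchNorm2d(16)
bn_frozen.eval()                                  # stand-in for frozen pre-trained stats
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
opt = torch.optim.Adam([p for m in (conv, bn_adaptive, head) for p in m.parameters()], lr=1e-4)

x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = (F.cross_entropy(head(bn_frozen(conv(x)).relu()), y) +
        F.cross_entropy(head(bn_adaptive(conv(x)).relu()), y))
opt.zero_grad(); loss.backward(); opt.step()
```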
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
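Cluster-level pseudo-labelling can be illustrated generically: cluster the target features, then give every member of a cluster the label the model predicts most often within it. The k-means routine and majority vote below are assumptions, not the paper's exact procedure.

```python
import torch

def cluster_pseudo_labels(feats, logits, k=7, iters=10):
    centers = feats[torch.randperm(len(feats))[:k]]
    for _ in range(iters):                      # plain k-means
        assign = torch.cdist(feats, centers).argmin(1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = feats[assign == c].mean(0)
    preds = logits.argmax(1)
    pseudo = preds.clone()
    for c in range(k):                          # majority vote per cluster
        members = assign == c
        if members.any():
            pseudo[members] = preds[members].mode().values
    return pseudo

labels = cluster_pseudo_labels(torch.rand(128, 64), torch.rand(128, 7))
```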
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that adversarial training on penultimate activations is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
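A hedged sketch of the mechanism: find the perturbation of the penultimate activations that most increases predictive entropy, then train the network to stay confident under it. The entropy objective and step sizes below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder, classifier = nn.Linear(32, 64), nn.Linear(64, 10)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)

x = torch.rand(16, 32)                          # unlabeled target batch
feats = encoder(x)
feats_d = feats.detach().requires_grad_(True)
probs = F.softmax(classifier(feats_d), dim=1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
grad, = torch.autograd.grad(entropy, feats_d)
adv = feats + 0.1 * F.normalize(grad, dim=1)    # worst-case activation shift

probs_adv = F.softmax(classifier(adv), dim=1)
loss = -(probs_adv * probs_adv.clamp_min(1e-8).log()).sum(1).mean()  # minimize entropy
opt.zero_grad(); loss.backward(); opt.step()
```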
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and Supervised Learning Methods in Adversarial Environments [63.942632088208505]
The practice of adversarial machine learning is inherent to today's operating environment.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
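The summary does not specify the training mechanism, so the following uses a standard FGSM adversarial-training step as a stand-in for "a realistic adversarial training mechanism".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.rand(32, 20), torch.randint(0, 2, (32,))
x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
grad, = torch.autograd.grad(loss, x)
x_adv = (x + 0.05 * grad.sign()).detach()   # FGSM perturbation
loss = F.cross_entropy(model(x_adv), y)     # train on adversarial examples
opt.zero_grad(); loss.backward(); opt.step()
```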
arXiv Detail & Related papers (2021-05-14T10:05:10Z)
- Deep Stable Learning for Out-Of-Distribution Generalization [27.437046504902938]
Approaches based on deep neural networks have achieved striking performance when testing data and training data share a similar distribution.
Eliminating the impact of distribution shifts between training and testing data is crucial for building performance-promising deep models.
We propose to address this problem by removing the dependencies between features via learning weights for training samples.
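The reweighting idea can be sketched as follows: learn per-sample weights that drive the weighted feature covariance toward zero, then reuse those weights in the training loss. Plain covariance is a simplification of the paper's independence criterion.

```python
import torch

feats = torch.rand(256, 16)                      # training-sample features
w_logit = torch.zeros(256, requires_grad=True)
opt = torch.optim.Adam([w_logit], lr=0.05)

for _ in range(200):
    w = torch.softmax(w_logit, dim=0) * len(feats)   # positive weights, mean ~1
    mean = (w[:, None] * feats).sum(0) / w.sum()
    centered = feats - mean
    cov = (w[:, None, None] * centered[:, :, None] * centered[:, None, :]).sum(0) / w.sum()
    off_diag = cov - torch.diag(torch.diagonal(cov))
    loss = off_diag.pow(2).sum()                 # penalize feature dependencies
    opt.zero_grad(); loss.backward(); opt.step()

sample_weights = (torch.softmax(w_logit, dim=0) * len(feats)).detach()
```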
arXiv Detail & Related papers (2021-04-16T03:54:21Z)
- Regularization with Latent Space Virtual Adversarial Training [4.874780144224057]
Virtual Adversarial Training (VAT) has shown impressive results among recently developed regularization methods.
We propose LVAT, which injects perturbations into the latent space instead of the input space.
LVAT can generate adversarial samples flexibly, resulting in more adverse effects and thus more effective regularization.
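A hedged sketch of latent-space VAT: search for a small latent perturbation that maximally changes the classifier's prediction on the decoded sample, then penalize that change. The toy encoder/decoder below stand in for the generative model LVAT uses to define the latent space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc, dec = nn.Linear(20, 8), nn.Linear(8, 20)
clf = nn.Linear(20, 5)

x = torch.rand(16, 20)
with torch.no_grad():
    z = enc(x)
    p = F.softmax(clf(dec(z)), dim=1)            # reference prediction

d = torch.randn_like(z).requires_grad_(True)
q = F.log_softmax(clf(dec(z + 1e-3 * F.normalize(d, dim=1))), dim=1)
adv_kl = F.kl_div(q, p, reduction='batchmean')
grad, = torch.autograd.grad(adv_kl, d)           # one power-iteration step
r_adv = 0.5 * F.normalize(grad, dim=1)           # adversarial latent direction

q = F.log_softmax(clf(dec(z + r_adv)), dim=1)
vat_loss = F.kl_div(q, p, reduction='batchmean') # smoothness regularizer
vat_loss.backward()
```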
arXiv Detail & Related papers (2020-11-26T08:51:38Z)
- Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm [15.898383112569237]
A popular method for anomaly detection is to use the generator of an adversarial network to formulate anomaly scores.
We propose a framework that effectively generates stable results across a wide range of training steps.
Our model achieves a frame-level AUC of 98.1%, surpassing recent state-of-the-art methods.
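The underlying paradigm can be sketched generically: a generator trained to reconstruct normal frames reconstructs anomalies poorly, so reconstruction error (optionally combined with a discriminator term) serves as the anomaly score. The sketch below illustrates that general approach, not the paper's stabilized training.

```python
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 64))
disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

def anomaly_score(frame):
    recon = gen(frame)
    rec_err = (frame - recon).pow(2).mean(dim=1)      # reconstruction error
    fakeness = torch.sigmoid(disc(recon)).squeeze(1)  # one common extra term
    return rec_err + fakeness

scores = anomaly_score(torch.rand(8, 64))  # higher => more anomalous
```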
arXiv Detail & Related papers (2020-04-16T13:48:58Z)