Data augmentation and pre-trained networks for extremely low data
regimes unsupervised visual inspection
- URL: http://arxiv.org/abs/2106.01277v1
- Date: Wed, 2 Jun 2021 16:37:20 GMT
- Title: Data augmentation and pre-trained networks for extremely low data
regimes unsupervised visual inspection
- Authors: Pierre Gutierrez, Antoine Cordier, Thaïs Caldeira, Théophile Sautory
- Abstract summary: We compare three approaches based on deep pre-trained features when varying the quantity of available data in MVTec AD dataset.
We show that although these methods are mostly robust to small sample sizes, they can still benefit greatly from data augmentation in the original image space.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The use of deep features coming from pre-trained neural networks for
unsupervised anomaly detection purposes has recently gathered momentum in the
computer vision field. In particular, industrial inspection applications can
take advantage of such features, as demonstrated by the multiple successes of
related methods on the MVTec Anomaly Detection (MVTec AD) dataset. These
methods make use of neural networks pre-trained on auxiliary classification
tasks such as ImageNet. However, to our knowledge, no comparative study of the
robustness of these approaches to low data regimes has been conducted yet. For
quality inspection applications, the handling of limited sample sizes
may be crucial as large quantities of images are not available for small
series. In this work, we aim to compare three approaches based on deep
pre-trained features when varying the quantity of available data in MVTec AD:
KNN, Mahalanobis, and PaDiM. We show that although these methods are mostly
robust to small sample sizes, they can still benefit greatly from data
augmentation in the original image space, which makes it possible to handle
very small production runs.
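As a rough illustration of the kind of pipeline the abstract describes (pre-trained deep features, image-space augmentation, and distance-based anomaly scoring), the sketch below implements KNN and Mahalanobis scoring over stand-in feature vectors. This is a minimal sketch, not the paper's exact setup: the feature extractor is replaced by random vectors, the flip augmentation, the value of k, and the covariance regularization term are all illustrative assumptions.

```python
import numpy as np

def augment(images):
    """Toy image-space augmentation: append horizontally flipped copies.

    The paper studies richer augmentation policies; a flip is only a
    placeholder to show how a tiny normal set can be expanded."""
    return np.concatenate([images, images[:, :, ::-1]], axis=0)

def knn_score(train_feats, feat, k=5):
    """KNN anomaly score: mean distance to the k nearest normal features."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    return float(np.sort(d)[:k].mean())

def mahalanobis_score(train_feats, feat):
    """Mahalanobis anomaly score under one Gaussian fit to normal features.

    A small ridge term keeps the covariance invertible when the number
    of samples is small relative to the feature dimension."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    diff = feat - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for deep features of normal (defect-free) training images.
    normal_feats = rng.normal(0.0, 1.0, size=(50, 8))
    in_dist = np.zeros(8)        # resembles the normal cluster
    outlier = np.full(8, 6.0)    # far from the normal cluster
    print(knn_score(normal_feats, in_dist), knn_score(normal_feats, outlier))
    print(mahalanobis_score(normal_feats, in_dist),
          mahalanobis_score(normal_feats, outlier))
```

PaDiM extends the Mahalanobis idea by fitting a position-specific Gaussian to the features of each image patch; that spatial bookkeeping is omitted here for brevity.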
Related papers
- Few-shot Online Anomaly Detection and Segmentation
This paper focuses on addressing the challenging yet practical few-shot online anomaly detection and segmentation (FOADS) task.
Under the FOADS framework, models are trained on a few-shot normal dataset and then inspected and improved by leveraging unlabeled streaming data that contains both normal and abnormal samples.
In order to achieve improved performance with limited training samples, we employ multi-scale feature embedding extracted from a CNN pre-trained on ImageNet to obtain a robust representation.
arXiv Detail & Related papers (2024-03-27T02:24:00Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amount of unlabelled data.
In this paper, we re-visit transformers pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z)
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
- Hybridization of Capsule and LSTM Networks for unsupervised anomaly detection on multivariate data
This paper introduces a novel NN architecture which hybridises the Long-Short-Term-Memory (LSTM) and Capsule Networks into a single network.
The proposed method uses an unsupervised learning technique to overcome the difficulty of obtaining large volumes of labelled training data.
arXiv Detail & Related papers (2022-02-11T10:33:53Z)
- Anomaly Detection in Image Datasets Using Convolutional Neural Networks, Center Loss, and Mahalanobis Distance
User activities generate a significant number of poor-quality or irrelevant images and data vectors.
For neural networks, the anomalous is usually defined as out-of-distribution samples.
This work proposes methods for supervised and semi-supervised detection of out-of-distribution samples in image datasets.
arXiv Detail & Related papers (2021-04-13T13:44:03Z)
- Few-shot Weakly-Supervised Object Detection via Directional Statistics
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD)
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z)
- Distance-Based Anomaly Detection for Industrial Surfaces Using Triplet Networks
Surface anomaly detection plays an important quality control role in many manufacturing industries to reduce scrap production.
Deep learning Convolutional Neural Networks (CNNs) have been at the forefront of these image processing-based solutions.
In this paper, we address that challenge by training the CNN on surface texture patches with a distance-based anomaly detection objective.
arXiv Detail & Related papers (2020-11-09T00:35:21Z)
- Semi-supervised deep learning based on label propagation in a 2D embedded space
Proposed solutions propagate labels from a small set of supervised images to a large set of unsupervised ones to train a deep neural network model.
We present a loop in which a deep neural network (VGG-16) is trained from a set with more correctly labeled samples along iterations.
As the labeled set improves along iterations, it improves the features of the neural network.
arXiv Detail & Related papers (2020-08-02T20:08:54Z)
- On Robustness and Transferability of Convolutional Neural Networks
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves distributional shift robustness.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
- ESPN: Extremely Sparse Pruned Networks
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.