FewSOME: One-Class Few Shot Anomaly Detection with Siamese Networks
- URL: http://arxiv.org/abs/2301.06957v4
- Date: Mon, 12 Jun 2023 22:23:55 GMT
- Title: FewSOME: One-Class Few Shot Anomaly Detection with Siamese Networks
- Authors: Niamh Belton, Misgina Tsighe Hagos, Aonghus Lawlor, Kathleen M. Curran
- Abstract summary: 'Few Shot anOMaly detection' (FewSOME) is a deep One-Class Anomaly Detection algorithm that accurately detects anomalies after training on only a few normal samples.
FewSOME is aided by pretrained weights and an architecture based on Siamese Networks.
Our experiments demonstrate FewSOME performs at state-of-the-art level on benchmark datasets.
- Score: 0.5735035463793008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent Anomaly Detection techniques have progressed the field considerably
but at the cost of increasingly complex training pipelines. Such techniques
require large amounts of training data, resulting in computationally expensive
algorithms that are unsuitable for settings where only a small number of normal
samples are available for training. We propose 'Few Shot anOMaly detection'
(FewSOME), a deep One-Class Anomaly Detection algorithm with the ability to
accurately detect anomalies having trained on 'few' examples of the normal
class and no examples of the anomalous class. FewSOME has low complexity
given its low data requirements and short training time, and is aided by
pretrained weights and an architecture based on Siamese Networks. By
means of an ablation study, we demonstrate how our proposed loss, 'Stop Loss',
improves the robustness of FewSOME. Our experiments demonstrate that FewSOME
performs at state-of-the-art level on benchmark datasets MNIST, CIFAR-10,
F-MNIST and MVTec AD while training on only 30 normal samples, a minute
fraction of the data that existing methods are trained on. Moreover, our
experiments show FewSOME to be robust to contaminated datasets. We also report
F1 score and balanced accuracy in addition to AUC as a benchmark for future
techniques to be compared against. Code is available at
https://github.com/niamhbelton/FewSOME.
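For intuition, here is a minimal sketch of a Siamese one-class setup of the kind the abstract describes -- an assumed ImageNet-pretrained backbone, a plain pairwise-distance loss over normal samples only, and distance-based anomaly scoring. It is not the authors' implementation; the exact architecture and the 'Stop Loss' term are defined in the paper and the linked repository.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Sketch of a FewSOME-style one-class Siamese setup (assumptions: an
# ImageNet-pretrained ResNet-18 backbone, pairwise-distance training loss;
# the paper's 'Stop Loss' term is omitted here).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # use penultimate features as the embedding

def siamese_loss(x1, x2):
    """Pull the embeddings of two normal training samples together."""
    return F.pairwise_distance(backbone(x1), backbone(x2)).mean()

@torch.no_grad()
def anomaly_score(x, normal_bank):
    """Distance to the nearest embedded normal sample; higher = more anomalous."""
    return torch.cdist(backbone(x), normal_bank).min(dim=1).values
```

At test time, `normal_bank` would hold the embeddings of the (as few as 30) normal training samples.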
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find solutions reachable via the training procedure, including the optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Class Imbalance in Object Detection: An Experimental Diagnosis and Study of Mitigation Strategies [0.5439020425818999]
This study introduces a benchmarking framework utilizing the YOLOv5 single-stage detector to address the problem of foreground-foreground class imbalance.
We scrutinized three established techniques: sampling, loss weighting, and data augmentation.
Our comparative analysis reveals that sampling and loss reweighting methods, while shown to be beneficial in two-stage detector settings, do not translate as effectively to improving YOLOv5's performance.
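As a concrete instance of the loss-reweighting idea examined in that study (a generic sketch, not the YOLOv5 configuration used there), inverse-frequency class weights can be passed to a standard cross-entropy loss:

```python
import torch
import torch.nn as nn

# Hypothetical per-class counts for an imbalanced task.
class_counts = torch.tensor([5000.0, 500.0, 50.0])
# Inverse-frequency weights: rare classes contribute more to the loss.
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)              # dummy predictions
targets = torch.randint(0, 3, (8,))     # dummy labels
loss = criterion(logits, targets)
```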
arXiv Detail & Related papers (2024-03-11T19:06:04Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
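A minimal sketch of this black-box tuning idea, assuming it amounts to learning a light affine map on top of frozen, noise-affected features (NMTune's actual objective includes additional regularization terms described in the paper; the names below are illustrative):

```python
import torch
import torch.nn as nn

feat_dim, n_classes = 768, 10
affine = nn.Linear(feat_dim, feat_dim)   # trainable affine transform on features
head = nn.Linear(feat_dim, n_classes)    # downstream task head
opt = torch.optim.Adam(list(affine.parameters()) + list(head.parameters()), lr=1e-3)

def tuning_step(frozen_features, labels):
    # The pre-trained encoder is a black box: only the affine map and head train.
    logits = head(affine(frozen_features))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```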
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Learning from Data with Noisy Labels Using Temporal Self-Ensemble [11.245833546360386]
Deep neural networks (DNNs) have an enormous capacity to memorize noisy labels.
Current state-of-the-art methods present a co-training scheme that trains dual networks using samples associated with small losses.
We propose a simple yet effective robust training scheme that operates by training only a single network.
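A hedged sketch of the temporal self-ensembling idea: keep a running average of each sample's past predictions and use it as a soft, noise-robust target. (The cited paper ensembles network weights along the SGD trajectory instead, so this is illustrative only.)

```python
import torch

n_samples, n_classes, alpha = 1000, 10, 0.6
ema_preds = torch.zeros(n_samples, n_classes)   # per-sample prediction history

def soft_targets(indices, batch_logits, epoch):
    # Exponential moving average of softmax predictions across epochs.
    probs = batch_logits.softmax(dim=1).detach()
    ema_preds[indices] = alpha * ema_preds[indices] + (1 - alpha) * probs
    # Bias correction so early-epoch targets are not shrunk toward zero.
    return ema_preds[indices] / (1 - alpha ** (epoch + 1))
```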
arXiv Detail & Related papers (2022-07-21T08:16:31Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
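For reference, a single-machine PGD-based adversarial training step has roughly the structure below; the paper's contribution is scaling this min-max loop to large batches across multiple machines, which this sketch does not attempt:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, opt, x, y, eps=8/255, step=2/255, iters=7):
    # Inner maximization: craft a bounded perturbation that increases the loss.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer minimization: update the model on the adversarial examples.
    opt.zero_grad()
    F.cross_entropy(model((x + delta).detach()), y).backward()
    opt.step()
```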
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are widely used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
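The weighting meta-model can be pictured as a tiny network mapping a sample's loss value to a weight in [0, 1]. A simplified sketch follows; CMW-Net's actual mapping is additionally class-aware and is trained by bi-level meta-optimization on a small clean set, both omitted here:

```python
import torch
import torch.nn as nn

# Tiny meta-model: per-sample loss value -> per-sample weight in [0, 1].
weight_net = nn.Sequential(
    nn.Linear(1, 100), nn.ReLU(), nn.Linear(100, 1), nn.Sigmoid()
)

def weighted_loss(per_sample_losses):
    # detach(): the weights are computed from loss values, but the main
    # network is updated only through the weighted sum.
    w = weight_net(per_sample_losses.detach().unsqueeze(1)).squeeze(1)
    return (w * per_sample_losses).mean()
```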
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Delving into Sample Loss Curve to Embrace Noisy and Imbalanced Data [17.7825114228313]
Corrupted labels and class imbalance are commonly encountered in practically collected training data.
Existing approaches alleviate these issues by adopting a sample re-weighting strategy.
However, biased samples -- those with corrupted labels and those from tail classes -- commonly co-exist in training data.
arXiv Detail & Related papers (2021-12-30T09:20:07Z)
- Deep Learning on a Data Diet: Finding Important Examples Early in Training [35.746302913918484]
In vision datasets, simple scores can be used to identify important examples very early in training.
We propose two such scores -- the Gradient Normed (GraNd) and the Error L2-Norm (EL2N).
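Of the two, EL2N is especially simple: it is the L2 norm of the gap between the softmax prediction and the one-hot label. A sketch from that definition (GraNd instead uses the norm of the per-sample loss gradient, typically averaged over several initializations early in training):

```python
import torch
import torch.nn.functional as F

def el2n_score(logits, labels, n_classes):
    # EL2N: || softmax(logits) - one_hot(label) ||_2 per sample.
    p = logits.softmax(dim=1)
    y = F.one_hot(labels, num_classes=n_classes).float()
    return (p - y).norm(dim=1)   # larger = harder / more important example
```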
arXiv Detail & Related papers (2021-07-15T02:12:20Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train inference models on inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Identifying Training Stop Point with Noisy Labeled Data [0.0]
We develop an algorithm to find a training stop point (TSP) at or close to the maximum obtainable test accuracy (MOTA).
We validated the robustness of our algorithm (AutoTSP) through several experiments on CIFAR-10, CIFAR-100, and a real-world noisy dataset.
arXiv Detail & Related papers (2020-12-24T20:07:30Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the resulting enlarged dataset can significantly improve the ability of the learned FER model.
Since training on the enlarged dataset is computationally costly, we propose to apply a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.