PANDA: Adapting Pretrained Features for Anomaly Detection and
Segmentation
- URL: http://arxiv.org/abs/2010.05903v3
- Date: Thu, 12 Aug 2021 16:53:29 GMT
- Title: PANDA: Adapting Pretrained Features for Anomaly Detection and
Segmentation
- Authors: Tal Reiss, Niv Cohen, Liron Bergman and Yedid Hoshen
- Abstract summary: We show that combining pretrained features with simple anomaly detection and segmentation methods convincingly outperforms state-of-the-art methods.
In order to obtain further performance gains, we adapt pretrained features to the target distribution.
- Score: 34.98371632913735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly detection methods require high-quality features. In recent years, the
anomaly detection community has attempted to obtain better features using
advances in deep self-supervised feature learning. Surprisingly, a very
promising direction, using pretrained deep features, has been mostly
overlooked. In this paper, we first empirically establish the perhaps
expected but previously unreported result that combining pretrained features
with simple anomaly detection and segmentation methods convincingly
outperforms much more complex state-of-the-art methods.
In order to obtain further performance gains in anomaly detection, we adapt
pretrained features to the target distribution. Although transfer learning
methods are well established in multi-class classification problems, the
one-class classification (OCC) setting is not as well explored. It turns out
that naive adaptation methods, which typically work well in supervised
learning, often result in catastrophic collapse (feature deterioration) and
reduce performance in OCC settings. A popular OCC method, DeepSVDD, advocates
using specialized architectures, but this limits the adaptation performance
gain. We propose two methods for combating collapse: i) a variant of early
stopping that dynamically learns the stopping iteration; and ii) elastic
regularization inspired by continual learning. Our method, PANDA, outperforms
the state-of-the-art in the OCC, outlier exposure and anomaly segmentation
settings by large margins.
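The first finding above, that simple methods on top of pretrained features already perform strongly, can be illustrated with a kNN anomaly score over frozen deep features. The sketch below is illustrative, not the authors' implementation: it assumes features have already been extracted (e.g., by a frozen ImageNet-pretrained network) into numpy arrays, and scores a test sample by its mean distance to the k nearest normal-training features.

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=2):
    """Score each test sample by its mean Euclidean distance to the
    k nearest training (normal) features; higher = more anomalous."""
    # Pairwise distances, shape (n_test, n_train), via broadcasting
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Keep only the k smallest distances per test sample
    knn = np.sort(d, axis=1)[:, :k]
    return knn.mean(axis=1)

# Toy demo: "normal" features cluster near the origin,
# anomalies lie far away in feature space.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 0.1, size=(100, 8))
normal_test = rng.normal(0.0, 0.1, size=(5, 8))
anomalous_test = rng.normal(3.0, 0.1, size=(5, 8))
scores = knn_anomaly_scores(train, np.vstack([normal_test, anomalous_test]))
```

With well-separated features, every anomalous score exceeds every normal score, so a simple threshold on the kNN distance already yields a detector.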
Related papers
- Adaptive Label Smoothing for Out-of-Distribution Detection [1.5999407512883508]
We propose a novel regularization method called adaptive label smoothing (ALS)
ALS pushes the non-true classes to have the same probabilities, whereas the maximal probability is neither fixed nor limited.
Our code will be available to the public.
arXiv Detail & Related papers (2024-10-08T15:35:11Z)
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
DML aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Adaptive Variance Thresholding: A Novel Approach to Improve Existing Deep
Transfer Vision Models and Advance Automatic Knee-Joint Osteoarthritis
Classification [0.11249583407496219]
Knee-Joint Osteoarthritis (KOA) is a prevalent cause of global disability and inherently complex to diagnose.
One promising classification avenue involves applying deep learning methods.
This study proposes a novel paradigm for improving post-training specialized classifiers.
arXiv Detail & Related papers (2023-11-10T00:17:07Z)
- FeCAM: Exploiting the Heterogeneity of Class Distributions in
Exemplar-Free Continual Learning [21.088762527081883]
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks.
Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention.
We explore prototypical networks for CIL, which generate new class prototypes using the frozen feature extractor and classify the features based on the Euclidean distance to the prototypes.
arXiv Detail & Related papers (2023-09-25T11:54:33Z)
- Robust Semi-Supervised Anomaly Detection via Adversarially Learned
Continuous Noise Corruption [11.135527192198092]
Anomaly detection is the task of recognising novel samples which deviate significantly from pre-established normality.
Deep Autoencoders (AE) have been widely used for anomaly detection tasks, but suffer from overfitting to a null identity function.
We introduce an efficient method of producing Adversarially Learned Continuous Noise (ALCN) to maximally and globally corrupt the input prior to denoising.
arXiv Detail & Related papers (2023-03-02T22:59:20Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
Prototype-centered Attentive Learning (PAL) is a model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Incremental Learning from Low-labelled Stream Data in Open-Set Video
Face Recognition [0.0]
We propose a novel incremental learning approach which combines a deep features encoder with an Open-Set Dynamic Ensembles of SVM.
Our method can use unsupervised operational data to enhance recognition.
Results show a benefit of up to a 15% F1-score increase with respect to non-adaptive state-of-the-art methods.
arXiv Detail & Related papers (2020-12-17T13:28:13Z)
- Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier [68.38233199030908]
Long-tail recognition tackles the naturally non-uniformly distributed data found in real-world scenarios.
While modern methods perform well on populated classes, their performance degrades significantly on tail classes.
Deep-RTC is proposed as a new solution to the long-tail problem, combining realism with hierarchical predictions.
arXiv Detail & Related papers (2020-07-20T05:57:42Z)
- Simple and Effective Prevention of Mode Collapse in Deep One-Class
Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
arXiv Detail & Related papers (2020-01-24T03:44:47Z)
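The second regularizer from the paper above (Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification) penalizes the minibatch variance when it becomes too small. A hinge-style form of such a penalty can be sketched as follows; the threshold `tau` and the exact hinge shape are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def variance_penalty(embeddings, tau=1.0):
    """Hinge penalty that activates when the mean per-dimension
    variance of a minibatch of embeddings drops below tau,
    discouraging all embeddings from collapsing to a single point."""
    var = embeddings.var(axis=0).mean()
    return float(max(0.0, tau - var))

# A fully collapsed minibatch (all embeddings identical) has zero
# variance and incurs the full penalty; a well-spread minibatch
# with variance above tau incurs none.
collapsed = np.zeros((32, 16))
spread = np.random.default_rng(1).normal(0.0, 2.0, size=(32, 16))
```

Added to the training loss of a deep one-class model, a term like this pushes back against hypersphere collapse while leaving well-spread minibatches unpenalized.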
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.