Latent-Insensitive Autoencoders for Anomaly Detection and
Class-Incremental Learning
- URL: http://arxiv.org/abs/2110.13101v1
- Date: Mon, 25 Oct 2021 16:53:49 GMT
- Title: Latent-Insensitive Autoencoders for Anomaly Detection and
Class-Incremental Learning
- Authors: Muhammad S. Battikh, Artem A. Lenskiy
- Abstract summary: We introduce Latent-Insensitive Autoencoder (LIS-AE) where unlabeled data from a similar domain is utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder.
We treat class-incremental learning as multiple anomaly detection tasks by adding a different latent layer for each class and using the other classes available in the task as negative examples to shape each latent layer.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstruction-based approaches to anomaly detection tend to fall short when
applied to complex datasets with target classes that possess high inter-class
variance. Similar to the idea of self-taught learning used in transfer
learning, many domains are rich with similar unlabeled datasets that
could be leveraged as a proxy for out-of-distribution samples. In this paper we
introduce Latent-Insensitive Autoencoder (LIS-AE) where unlabeled data from a
similar domain is utilized as negative examples to shape the latent layer
(bottleneck) of a regular autoencoder such that it is only capable of
reconstructing one task. Since the underlying goal of LIS-AE is to only
reconstruct in-distribution samples, this makes it naturally applicable in the
domain of class-incremental learning. We treat class-incremental learning as
multiple anomaly detection tasks by adding a different latent layer for each
class and using the other classes available in the task as negative examples to
shape each latent layer. We test our model in multiple anomaly detection and
class-incremental settings, presenting quantitative and qualitative analyses
that showcase the accuracy and flexibility of our model for both anomaly
detection and class-incremental learning.
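
For illustration, below is a minimal sketch in PyTorch of how such a latent-shaping setup might look. The abstract only states that negative examples from a similar domain shape the latent (bottleneck) layer so that the autoencoder reconstructs in-distribution samples only; the module names, the loss terms, and the single-step training routine here are assumptions, not the paper's exact formulation.

# Hypothetical LIS-AE-style sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LISAE(nn.Module):
    def __init__(self, in_dim=784, hid_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # The bottleneck is a separate module so it can be shaped with
        # negative examples, or replicated per class for incremental learning.
        self.latent = nn.Linear(hid_dim, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, in_dim))

    def forward(self, x, latent=None):
        latent = self.latent if latent is None else latent
        return self.decoder(latent(self.encoder(x)))


def shaping_step(model, optimizer, x_pos, x_neg, alpha=1.0):
    """One assumed training step: keep reconstruction error low on
    in-distribution (positive) samples while pushing it up on negative
    samples drawn from a similar unlabeled dataset."""
    optimizer.zero_grad()
    rec_pos = F.mse_loss(model(x_pos), x_pos)
    rec_neg = F.mse_loss(model(x_neg), x_neg)
    # The weighted subtraction is a stand-in for whatever latent-shaping
    # objective the paper actually uses; in practice the negative term
    # would likely be bounded to keep training stable.
    loss = rec_pos - alpha * rec_neg
    loss.backward()
    optimizer.step()
    return loss.item()


def anomaly_score(model, x, latent=None):
    # Higher reconstruction error means the sample is more likely anomalous.
    with torch.no_grad():
        return ((model(x, latent) - x) ** 2).mean(dim=1)

Under the same assumptions, the class-incremental reading would keep the shared encoder/decoder, add a fresh latent layer (another nn.Linear(hid_dim, latent_dim)) for each new class, train it with that class as positives and the other classes available in the task as negatives, and at test time assign a sample to the class whose latent layer yields the lowest anomaly_score.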
Related papers
- Toward Multi-class Anomaly Detection: Exploring Class-aware Unified Model against Inter-class Interference [67.36605226797887]
We introduce a Multi-class Implicit Neural representation Transformer for unified Anomaly Detection (MINT-AD).
By learning the multi-class distributions, the model generates class-aware query embeddings for the transformer decoder.
MINT-AD can project category and position information into a feature embedding space, further supervised by classification and prior probability loss functions.
arXiv Detail & Related papers (2024-03-21T08:08:31Z) - mixed attention auto encoder for multi-class industrial anomaly
detection [2.8519768339207356]
We propose a unified mixed-attention auto encoder (MAAE) to implement multi-class anomaly detection with a single model.
To alleviate the performance degradation caused by the diverse distribution patterns of different categories, we employ spatial and channel attention.
MAAE delivers remarkable performance on the benchmark dataset compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-09-22T08:17:48Z) - Generalization Bounds for Few-Shot Transfer Learning with Pretrained
Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z) - Intra-class Adaptive Augmentation with Neighbor Correction for Deep
Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method outperforms the state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z) - Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification has attracted much attention in the machine learning community to address the problem of assigning a single sample to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z) - Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize the seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z) - Zero-sample surface defect detection and classification based on
semantic feedback neural network [13.796631421521765]
We propose an Ensemble Co-training algorithm, which adaptively reduces the prediction error in image tag embedding from multiple angles.
Various experiments conducted on the zero-shot dataset and the cylinder liner dataset in the industrial field provide competitive results.
arXiv Detail & Related papers (2021-06-15T08:26:36Z) - Meta-learning One-class Classifiers with Eigenvalue Solvers for
Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z) - Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.