Few-shot Deep Representation Learning based on Information Bottleneck Principle
- URL: http://arxiv.org/abs/2111.12950v1
- Date: Thu, 25 Nov 2021 07:15:12 GMT
- Title: Few-shot Deep Representation Learning based on Information Bottleneck Principle
- Authors: Shin Ando
- Abstract summary: In a standard anomaly detection problem, a detection model is trained in an unsupervised setting, under the assumption that the samples were generated from a single source of normal data.
In practice, normal data often consist of multiple classes. In such settings, learning to differentiate normal instances from anomalies amid the discrepancies between normal classes, without large-scale labeled data, presents a significant challenge.
In this work, we attempt to overcome this challenge by preparing a few examples from each normal class, which is not excessively costly.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a standard anomaly detection problem, a detection model is trained in an unsupervised setting, under the assumption that the samples were generated from a single source of normal data. In practice, however, normal data often consist of multiple classes. In such settings, learning to differentiate normal instances from anomalies amid the discrepancies between normal classes, without large-scale labeled data, presents a significant challenge. In this work, we attempt to overcome this challenge by preparing a few examples from each normal class, which is not excessively costly. This setting can also be described as few-shot learning over multiple normal classes, with the goal of learning a representation useful for anomaly detection. To utilize the limited labeled examples in training, we integrate the inter-class distances among the labeled examples in the deep feature space into the MAP loss. We derive their relation from an information-theoretic principle. Our empirical study shows that the proposed model improves the separation of normal classes in the deep feature space, which contributes to identifying examples of the anomaly class.
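The abstract's core idea, augmenting a base loss with an inter-class distance term computed over the few labeled examples, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the choice of mean squared distance between class centroids, and the weighting parameter `lam` are all assumptions made for this sketch.

```python
import numpy as np

def inter_class_distance_penalty(features, labels):
    """Mean squared distance between class centroids in the feature space.

    features: (n_samples, d) array of deep feature vectors.
    labels:   (n_samples,) array of class labels for the few labeled examples.
    """
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Pairwise squared distances between all centroid pairs.
    diffs = centroids[:, None, :] - centroids[None, :, :]
    sq_dists = (diffs ** 2).sum(axis=-1)
    n = len(classes)
    # Average over ordered off-diagonal pairs (the diagonal contributes zero).
    return sq_dists.sum() / (n * (n - 1))

def regularized_map_loss(base_map_loss, features, labels, lam=0.1):
    """Subtract the weighted inter-class distance term from the base loss,
    so that larger separation between normal classes lowers the loss."""
    return base_map_loss - lam * inter_class_distance_penalty(features, labels)
```

The subtraction (rather than addition) reflects the stated goal of encouraging separation between normal classes; in the paper the exact form and weighting of the term are derived from the information bottleneck principle rather than fixed by hand.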
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- On The Relationship between Visual Anomaly-free and Anomalous Representations [0.0]
Anomaly Detection is an important problem within computer vision, having variety of real-life applications.
In this paper, we make an important hypothesis and show, by exhaustive experimentation, that the space of anomaly-free visual patterns of the normal samples correlates well with each of the various spaces of anomalous patterns of the class-specific anomaly samples.
arXiv Detail & Related papers (2024-10-09T06:18:53Z)
- Reconstruction-based Multi-Normal Prototypes Learning for Weakly Supervised Anomaly Detection [9.4765288592895]
Anomaly detection is a crucial task in various domains.
Most of the existing methods assume the normal sample data clusters around a single central prototype.
We propose a reconstruction-based multi-normal prototypes learning framework.
arXiv Detail & Related papers (2024-08-23T18:27:58Z)
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- Few-shot Anomaly Detection in Text with Deviation Learning [13.957106119614213]
We introduce FATE, a framework that learns anomaly scores explicitly in an end-to-end method using deviation learning.
Our model is optimized to learn the distinct behavior of anomalies by utilizing a multi-head self-attention layer and multiple instance learning approaches.
arXiv Detail & Related papers (2023-08-22T20:40:21Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Deep Visual Anomaly detection with Negative Learning [18.79849041106952]
In this paper, we propose anomaly detection with negative learning (ADNL), which employs the negative learning concept for the enhancement of anomaly detection.
The idea is to limit the reconstruction capability of a generative model using a given small amount of anomaly examples.
This way, the network not only learns to reconstruct normal data but also encloses the normal distribution far from the possible distribution of anomalies.
arXiv Detail & Related papers (2021-05-24T01:48:44Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.