Few-shot Deep Representation Learning based on Information Bottleneck Principle
- URL: http://arxiv.org/abs/2111.12950v1
- Date: Thu, 25 Nov 2021 07:15:12 GMT
- Title: Few-shot Deep Representation Learning based on Information Bottleneck Principle
- Authors: Shin Ando
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a standard anomaly detection problem, a detection model is trained in an
unsupervised setting, under the assumption that the samples were generated from a single
source of normal data. In practice, however, normal data often consist of multiple classes.
In such settings, learning to differentiate normal instances from anomalies amid the
discrepancies between normal classes, without large-scale labeled data, presents a
significant challenge. In this work, we attempt to overcome this challenge by preparing a
few examples from each normal class, which is not excessively costly. This setting can also
be described as few-shot learning over multiple normal classes, with the goal of learning a
representation useful for anomaly detection. To exploit the limited labeled examples in
training, we integrate the inter-class distances among the labeled examples in the deep
feature space into the MAP loss, deriving their relation from an information-theoretic
principle. Our empirical study shows that the proposed model improves the separation of the
normal classes in the deep feature space, which in turn helps identify examples of the
anomaly class.
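The abstract does not give the exact objective, but the idea of augmenting a MAP-style loss with inter-class distances among the few labeled examples can be sketched as follows. The function names, the squared-Euclidean centroid distance, and the weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def interclass_distance_penalty(features, labels):
    """Mean pairwise squared distance between class centroids in the deep
    feature space. Larger values mean the few labeled normal classes are
    better separated."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    dists = []
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            dists.append(np.sum((centroids[i] - centroids[j]) ** 2))
    return float(np.mean(dists))

def map_loss_with_separation(nll, features, labels, lam=0.1):
    """Hypothetical combined objective: a MAP-style negative log-likelihood
    minus a weighted inter-class distance term, so that minimizing the loss
    simultaneously pushes the labeled normal classes apart."""
    return nll - lam * interclass_distance_penalty(features, labels)
```

In an actual model, `features` would be the network's embeddings and `nll` the MAP term; the subtraction makes class separation a reward rather than a penalty.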
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly Detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con2, which addresses this problem by placing normal training data into distinct contexts.
Our approach achieves state-of-the-art results on various benchmarks and also performs well in a more realistic healthcare setting.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train one single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
arXiv Detail & Related papers (2024-03-11T08:07:46Z)
- Few-shot Anomaly Detection in Text with Deviation Learning [13.957106119614213]
We introduce FATE, a framework that learns anomaly scores explicitly in an end-to-end method using deviation learning.
Our model is optimized to learn the distinct behavior of anomalies by utilizing a multi-head self-attention layer and multiple instance learning approaches.
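As a rough illustration of the deviation-learning idea FATE builds on (scores of normal points are pulled toward a Gaussian reference prior over scores, while anomaly scores are pushed beyond a margin), here is a minimal sketch. The N(0, 1) reference prior, the margin value, and all names are assumptions, not FATE's actual design.

```python
import numpy as np

def deviation_loss(scores, labels, margin=5.0, n_ref=5000, seed=0):
    """Deviation-learning objective sketch.

    scores : model-predicted anomaly scores, shape (n,)
    labels : 1 for the few labeled anomalies, 0 for normal points
    """
    rng = np.random.default_rng(seed)
    ref = rng.standard_normal(n_ref)            # reference scores from the prior
    dev = (scores - ref.mean()) / ref.std()     # z-score deviation from the prior
    # Normal points: pull deviation toward 0; anomalies: push it past the margin.
    per_point = (1 - labels) * np.abs(dev) + labels * np.maximum(0.0, margin - dev)
    return float(per_point.mean())
```

An anomaly whose score already deviates far beyond the margin contributes (near) zero loss, while an anomaly scored like a normal point is penalized by roughly the full margin.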
arXiv Detail & Related papers (2023-08-22T20:40:21Z)
- Unsupervised Deep One-Class Classification with Adaptive Threshold based on Training Dynamics [11.047949973156836]
We propose an unsupervised deep one-class classification that learns normality from pseudo-labeled normal samples.
Experiments on 10 anomaly detection benchmarks show that our method effectively improves performance on anomaly detection by sizable margins.
arXiv Detail & Related papers (2023-02-13T01:51:34Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Deep Visual Anomaly detection with Negative Learning [18.79849041106952]
In this paper, we propose anomaly detection with negative learning (ADNL), which employs the negative learning concept for the enhancement of anomaly detection.
The idea is to limit the reconstruction capability of a generative model using a small number of given anomaly examples.
This way, the network not only learns to reconstruct normal data but also keeps the learned normal distribution far from the possible distribution of anomalies.
arXiv Detail & Related papers (2021-05-24T01:48:44Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video [120.18562044084678]
Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years.
We propose a background-agnostic framework that learns from training videos containing only normal events.
arXiv Detail & Related papers (2020-08-27T18:39:24Z)
- Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
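The pairwise-relation idea behind PReNet can be sketched as building a training set of instance pairs, each labeled with an ordinal target according to its pair type (anomaly-anomaly, anomaly-unlabeled, unlabeled-unlabeled). The target values and the helper below are hypothetical, not PReNet's published configuration.

```python
import numpy as np

# Hypothetical ordinal targets for the three pair types; the values actually
# used by PReNet may differ.
PAIR_TARGETS = {"aa": 8.0, "au": 4.0, "uu": 0.0}

def make_pairs(x_anom, x_unlab, n_pairs, rng):
    """Sample (concatenated pair, ordinal target) examples for training a
    pairwise relation scorer."""
    xs, ts = [], []
    for _ in range(n_pairs):
        kind = rng.choice(["aa", "au", "uu"])
        if kind == "aa":          # two labeled anomalies
            i, j = rng.integers(len(x_anom), size=2)
            pair = np.concatenate([x_anom[i], x_anom[j]])
        elif kind == "au":        # one anomaly, one unlabeled instance
            i = rng.integers(len(x_anom))
            j = rng.integers(len(x_unlab))
            pair = np.concatenate([x_anom[i], x_unlab[j]])
        else:                     # two unlabeled instances
            i, j = rng.integers(len(x_unlab), size=2)
            pair = np.concatenate([x_unlab[i], x_unlab[j]])
        xs.append(pair)
        ts.append(PAIR_TARGETS[str(kind)])
    return np.stack(xs), np.array(ts)
```

A regression network trained on these pairs then scores a test instance by pairing it with training data; pairs that look anomaly-like receive higher predicted targets.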
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.