Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly
Detection with Scale Learning
- URL: http://arxiv.org/abs/2305.16114v1
- Date: Thu, 25 May 2023 14:48:00 GMT
- Title: Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly
Detection with Scale Learning
- Authors: Hongzuo Xu and Yijie Wang and Juhui Wei and Songlei Jian and Yizhou Li
and Ning Liu
- Abstract summary: We devise novel data-driven supervision for tabular data by introducing a characteristic -- scale -- as data labels.
Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training.
This paper further proposes a scale learning-based anomaly detection method.
- Score: 11.245813423781415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the unsupervised nature of anomaly detection, the key to fueling deep
models is finding supervisory signals. Different from current
reconstruction-guided generative models and transformation-based contrastive
models, we devise novel data-driven supervision for tabular data by introducing
a characteristic -- scale -- as data labels. By representing varied sub-vectors
of data instances, we define scale as the relationship between the
dimensionality of original sub-vectors and that of representations. Scales
serve as labels attached to transformed representations, thus offering ample
labeled data for neural network training. This paper further proposes a scale
learning-based anomaly detection method. Supervised by the learning objective
of scale distribution alignment, our approach learns the ranking of
representations converted from varied subspaces of each data instance. Through
this proxy task, our approach models inherent regularities and patterns within
data, which well describes data "normality". Abnormal degrees of testing
instances are obtained by measuring whether they fit these learned patterns.
Extensive experiments show that our approach leads to significant improvement
over state-of-the-art generative/contrastive anomaly detection methods.
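The abstract's core idea can be illustrated with a minimal NumPy sketch: sub-vectors of varied dimensionality are transformed into fixed-size representations, the scale (here, the index of the sub-vector's dimensionality) serves as a free label, and the anomaly score measures how poorly a test instance fits the learned scale patterns. The zero-padding transform, the softmax-regression "network", and all names below are illustrative assumptions, not the paper's actual architecture or its scale-distribution-alignment objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scale_dataset(X, dims=(2, 4, 6), rep_dim=8):
    """For each instance, emit one fixed-size representation per scale.
    The scale index is the (free) classification label; zero-padding
    stands in for the learned projection used in the paper."""
    reps, labels = [], []
    for x in X:
        for label, d in enumerate(dims):
            idx = rng.choice(x.shape[0], size=d, replace=False)
            rep = np.zeros(rep_dim)
            rep[:d] = x[idx]
            reps.append(rep)
            labels.append(label)
    return np.stack(reps), np.array(labels)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_scale_classifier(reps, labels, n_classes, lr=0.1, epochs=200):
    """Plain softmax regression as a stand-in for the neural network."""
    W = np.zeros((reps.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        p = softmax(reps @ W)
        W -= lr * reps.T @ (p - onehot) / len(reps)
    return W

def anomaly_score(x, W, dims=(2, 4, 6), rep_dim=8):
    """Cross-entropy between predicted and true scale labels:
    the worse an instance fits the learned scale patterns, the higher."""
    reps, labels = make_scale_dataset(x[None, :], dims, rep_dim)
    p = softmax(reps @ W)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

# Train on "normal" tabular data drawn from a tight Gaussian.
X_train = rng.normal(0.0, 0.1, size=(200, 10))
reps, labels = make_scale_dataset(X_train)
W = train_scale_classifier(reps, labels, n_classes=3)

normal_score = anomaly_score(rng.normal(0.0, 0.1, size=10), W)
outlier_score = anomaly_score(rng.normal(5.0, 1.0, size=10), W)
print(normal_score, outlier_score)
```

The key point of the proxy task survives even in this toy form: no manual labels are needed, because the scale of each transformed sub-vector is known by construction.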
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- ProtoVAE: Prototypical Networks for Unsupervised Disentanglement [1.6114012813668934]
We introduce a novel deep generative VAE-based model, ProtoVAE, that leverages a deep metric learning Prototypical network trained using self-supervision.
Our model is completely unsupervised and requires no a priori knowledge of the dataset, including the number of factors.
We evaluate our proposed model on the benchmark dSprites, 3DShapes, and MPI3D disentanglement datasets.
arXiv Detail & Related papers (2023-05-16T01:29:26Z)
- Metric Distribution to Vector: Constructing Data Representation via Broad-Scale Discrepancies [15.40538348604094]
We present a novel embedding strategy named $\mathbf{MetricDistribution2vec}$ to extract distribution characteristics into the vectorial representation of each data sample.
We demonstrate the application and effectiveness of our representation method in the supervised prediction tasks on extensive real-world structural graph datasets.
arXiv Detail & Related papers (2022-10-02T03:18:30Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Gradient-Based Adversarial and Out-of-Distribution Detection [15.510581400494207]
We introduce confounding labels in gradient generation to probe the effective expressivity of neural networks.
We show that our gradient-based approach allows for capturing the anomaly in inputs based on the effective expressivity of the models.
arXiv Detail & Related papers (2022-06-16T15:50:41Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Backpropagated Gradient Representations for Anomaly Detection [19.191613437266184]
Compared to normal data, anomalies require more drastic model updates to be fully represented.
We show that the proposed method using gradient-based representations achieves state-of-the-art anomaly detection performance in benchmark image recognition datasets.
arXiv Detail & Related papers (2020-07-18T19:39:42Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- Correlation-aware Deep Generative Model for Unsupervised Anomaly Detection [9.578395294627057]
Unsupervised anomaly detection aims to identify anomalous samples from highly complex and unstructured data.
We propose a method, Correlation-aware unsupervised Anomaly detection via a Deep Gaussian Mixture Model (CADGMM).
Experiments on real-world datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-02-18T03:32:06Z)
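The CADGMM entry above scores anomalies by their density under a Gaussian mixture fitted to normal data. A minimal sketch of that density-scoring half, using scikit-learn's `GaussianMixture` in place of the paper's deep, correlation-aware encoder (which is omitted here); the data and all names are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# "Normal" training data: two well-separated clusters.
X_train = np.vstack([
    rng.normal(0.0, 0.3, size=(200, 2)),
    rng.normal(4.0, 0.3, size=(200, 2)),
])

# Fit a mixture to the normal data; density under the mixture
# defines normality.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_train)

def anomaly_score(x):
    # Negative log-likelihood under the fitted mixture:
    # low density means high anomaly score.
    return -gmm.score_samples(np.atleast_2d(x))[0]

inlier_score = anomaly_score([0.1, -0.2])
outlier_score = anomaly_score([2.0, 2.0])
print(inlier_score, outlier_score)
```

A point far from both clusters receives a much higher score than one inside a cluster, which is the behavior the full deep model refines on complex, unstructured data.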
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.