Dictionary Learning with Uniform Sparse Representations for Anomaly
Detection
- URL: http://arxiv.org/abs/2201.03869v1
- Date: Tue, 11 Jan 2022 10:22:46 GMT
- Title: Dictionary Learning with Uniform Sparse Representations for Anomaly
Detection
- Authors: Paul Irofti, Cristian Rusu, Andrei Pătrașcu
- Abstract summary: We study how dictionary learning (DL) performs in detecting abnormal samples in a dataset of signals.
Numerical simulations show that the resulting subspace can be used efficiently to discriminate anomalies from regular data points.
- Score: 2.277447144331876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many applications like audio and image processing show that sparse
representations are a powerful and efficient signal modeling technique. Finding
an optimal dictionary that generates at the same time the sparsest
representations of data and the smallest approximation error is a hard problem
approached by dictionary learning (DL). We study how DL performs in detecting
abnormal samples in a dataset of signals. In this paper we use a particular DL
formulation that seeks a uniform sparse representation model to detect the
underlying subspace of the majority of samples in a dataset, using a K-SVD-type
algorithm. Numerical simulations show that the resulting subspace can be used
efficiently to discriminate anomalies from regular data points.
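The idea in the abstract can be sketched as follows: code every sample with exactly `s` atoms (the uniform sparsity constraint), learn the dictionary on the whole dataset so it captures the subspace of the majority, and score each sample by its residual norm. This is a simplified illustration, not the paper's algorithm — it uses OMP for coding and a MOD-style least-squares dictionary update in place of the K-SVD-type update, and all names are illustrative.

```python
import numpy as np

def omp(D, y, s):
    """Orthogonal Matching Pursuit: greedily select s atoms for y."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def learn_dictionary(Y, n_atoms, s, iters=15, seed=0):
    """Alternate uniform s-sparse coding with a MOD-style update:
    least-squares refit of the dictionary, then atom renormalization."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        X = np.column_stack([omp(D, Y[:, i], s) for i in range(Y.shape[1])])
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

def anomaly_scores(D, Y, s):
    """Residual norm after uniform s-sparse coding; large = anomalous."""
    return np.array([np.linalg.norm(Y[:, i] - D @ omp(D, Y[:, i], s))
                     for i in range(Y.shape[1])])
```

Samples well explained by the learned subspace get near-zero residuals, while a sample outside it — an anomaly — keeps most of its energy in the residual.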
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - Anomalies, Representations, and Self-Supervision [0.0]
We develop a self-supervised method for density-based anomaly detection using contrastive learning, and test it using event-level anomaly data from CMS ADC 2021.
The AnomalyCLR technique is data-driven and uses augmentations of the background data to mimic non-Standard-Model events in a model-agnostic way.
arXiv Detail & Related papers (2023-01-11T19:00:00Z) - Improving the Robustness of Summarization Models by Detecting and
Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent
Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based image-processing applications such as denoising and inpainting.
arXiv Detail & Related papers (2021-11-17T10:45:10Z) - Information-Theoretic Generalization Bounds for Iterative
Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly
Detection [9.19194451963411]
Semi-supervised anomaly detection aims to detect anomalies using a model trained only on normal samples.
We propose a method, DASVDD, that jointly learns the parameters of an autoencoder while minimizing the volume of an enclosing hyper-sphere on its latent representation.
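The joint objective described in this summary can be sketched as a Deep-SVDD-style combination of reconstruction error and latent-sphere volume. The exact DASVDD loss may differ; this assumed form, with `lam` trading off the two terms, is only illustrative.

```python
import numpy as np

def dasvdd_style_loss(x, x_hat, z, center, lam=1.0):
    """Assumed joint objective: autoencoder reconstruction error plus the
    squared distance of the latent code z to the hyper-sphere center
    (pulling codes toward the center shrinks the enclosing sphere)."""
    recon = np.sum((x - x_hat) ** 2)     # reconstruction term
    sphere = np.sum((z - center) ** 2)   # latent-sphere term
    return recon + lam * sphere
```

At test time the same quantity can serve as an anomaly score: samples that reconstruct poorly or land far from the center are flagged.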
arXiv Detail & Related papers (2021-06-09T21:57:41Z) - Learning from Incomplete Features by Simultaneous Training of Neural
Networks and Sparse Coding [24.3769047873156]
This paper addresses the problem of training a classifier on a dataset with incomplete features.
We assume that different subsets of features (random or structured) are available at each data instance.
A new supervised learning method is developed to train a general classifier, using only a subset of features per sample.
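One way sparse coding helps with incomplete features, consistent with the summary above, is to code each sample using only the dictionary rows corresponding to its observed features; the resulting code can then feed a classifier or reconstruct the missing entries. This is a hedged sketch, not the paper's method; the function name and OMP-style coding are assumptions.

```python
import numpy as np

def code_with_missing(D, y, mask, s):
    """Sparse-code a sample from its observed features only: restrict the
    dictionary to observed rows (mask), then greedily pick s atoms."""
    Do, yo = D[mask], y[mask]
    support, residual = [], yo.copy()
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(Do.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Do[:, support], yo, rcond=None)
        residual = yo - Do[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

Multiplying the full dictionary by the recovered code, `D @ x`, then gives an estimate of the unobserved entries.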
arXiv Detail & Related papers (2020-11-28T02:20:39Z) - Contrast-weighted Dictionary Learning Based Saliency Detection for
Remote Sensing Images [3.338193485961624]
We propose a novel saliency detection model based on Contrast-weighted Dictionary Learning (CDL) for remote sensing images.
Specifically, the proposed CDL learns salient and non-salient atoms from positive and negative samples to construct a discriminant dictionary.
By using the proposed joint saliency measure, a variety of saliency maps are generated based on the discriminant dictionary.
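Given a discriminant dictionary whose atoms split into salient and non-salient groups, one plausible per-sample saliency measure is the contrast of sparse-coefficient energy between the two groups. The CDL paper's joint measure may be defined differently; this sketch only illustrates the general idea, and the function name is invented.

```python
import numpy as np

def saliency_from_codes(codes, n_salient):
    """codes: (n_atoms, n_samples) sparse coefficients over a discriminant
    dictionary whose first n_salient atoms came from positive (salient)
    samples. Returns per-sample saliency in [0, 1] as the fraction of
    coefficient energy carried by the salient atoms."""
    pos = np.sum(codes[:n_salient] ** 2, axis=0)
    neg = np.sum(codes[n_salient:] ** 2, axis=0)
    return pos / (pos + neg + 1e-12)
```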
arXiv Detail & Related papers (2020-04-06T06:49:05Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.