Outlier Detection Using Generative Models with Theoretical Performance
Guarantees
- URL: http://arxiv.org/abs/2310.09999v1
- Date: Mon, 16 Oct 2023 01:25:34 GMT
- Title: Outlier Detection Using Generative Models with Theoretical Performance
Guarantees
- Authors: Jirong Yi, Jingchao Gao, Tianming Wang, Xiaodong Wu, Weiyu Xu
- Abstract summary: We establish theoretical recovery guarantees for reconstruction of signals using generative models in the presence of outliers.
Our results apply to both linear generator neural networks and nonlinear generator neural networks with an arbitrary number of layers.
- Score: 11.985270449383272
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper considers the problem of recovering signals modeled by generative
models from linear measurements contaminated with sparse outliers. We propose
an outlier detection approach for reconstructing the ground-truth signals
modeled by generative models under sparse outliers. We establish theoretical
recovery guarantees for reconstruction of signals using generative models in
the presence of outliers, giving lower bounds on the number of correctable
outliers. Our results apply to both linear generator neural networks and
nonlinear generator neural networks with an arbitrary number of layers.
We propose an iterative alternating direction method of multipliers (ADMM)
algorithm for solving the outlier detection problem via $\ell_1$ norm
minimization, and a gradient descent algorithm for solving the outlier
detection problem via squared $\ell_1$ norm minimization. We conduct extensive
experiments using variational autoencoders (VAEs) and deep convolutional
generative adversarial networks (DCGANs), and the results show that signals can
be successfully reconstructed under outliers using our approach, which
outperforms the traditional Lasso and $\ell_2$ minimization approaches.
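For concreteness, here is a minimal sketch (not the authors' released code) of the gradient descent variant: the latent code z is optimized against the $\ell_1$ measurement loss, whose per-entry gradients are bounded, so the few outlier-hit measurements cannot dominate the fit. The generator G, measurement matrix A, measurements y, and all hyperparameters here are assumptions for illustration.

```python
# Hedged sketch: l1-loss gradient descent for outlier-robust recovery with a
# generative prior. G maps R^k -> R^n (flattened signal), A is (m, n), y is (m,).
import torch

def recover(G, A, y, latent_dim, steps=2000, lr=1e-2):
    """Estimate x = G(z*) from y = A @ x + sparse outliers."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = A @ G(z) - y
        loss = residual.abs().sum()   # l1 loss; use .sum() ** 2 for the squared-l1 variant
        loss.backward()
        opt.step()
    return G(z).detach()
```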
Related papers
- Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks [15.074950361970194]
We provide a unified analysis for a family of algorithms that encompasses IRLS, the recently proposed lin-RFM algorithm, and alternating minimization on linear diagonal neural networks.
We show that, with an appropriately chosen reweighting policy, the algorithm can achieve favorable performance in only a handful of iterations.
We also show that leveraging this structure in the reweighting scheme provably improves test error compared to coordinate-wise reweighting.
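For readers new to this family, a hedged sketch of classical IRLS for equality-constrained $\ell_1$-style sparse recovery follows (illustrative only; the paper analyzes general reweighting policies, and all names here are assumptions):

```python
# Hedged IRLS sketch: approximate min ||x||_1 s.t. A x = y (A is m x n, m < n).
import numpy as np

def irls_l1(A, y, iters=50, eps=1e-8):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        D = np.diag(np.abs(x) + eps)                   # reweighting from previous iterate
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)  # weighted minimum-norm solution
    return x
```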
arXiv Detail & Related papers (2024-06-04T20:37:17Z)
- Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN [4.5123329001179275]
This study presents an adversarial method for anomaly detection in real-world applications, leveraging the power of generative adversarial neural networks (GANs).
Previous methods suffer from high variance in class-wise accuracy, which makes them inapplicable to all types of anomalies.
The proposed method, named RCALAD, addresses this problem by introducing a novel discriminator into the structure, resulting in a more efficient training process.
arXiv Detail & Related papers (2023-04-16T13:05:39Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
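For context, the fully-factorized Gaussian posterior that VLAEs aim to improve on is typically sampled with the standard reparameterization trick; a minimal sketch (names illustrative):

```python
# Hedged sketch: sample z ~ N(mu, diag(exp(log_var))) via reparameterization,
# the fully-factorized Gaussian amortized posterior used by standard VAEs.
import torch

def reparameterized_sample(mu, log_var):
    eps = torch.randn_like(mu)              # noise independent of encoder outputs
    return mu + torch.exp(0.5 * log_var) * eps
```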
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
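Algorithm unfolding, the general technique REST builds on, turns the iterations of a model-based solver into network layers with learnable parameters. A hedged LISTA-style sketch follows (illustrative of unfolding in general; this is not the REST architecture):

```python
# Hedged sketch: ISTA iterations unrolled into a network with learned weights
# and per-layer soft thresholds.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, m, n, layers=10):
        super().__init__()
        self.W1 = nn.Linear(m, n, bias=False)   # learned analogue of (1/L) A^T
        self.W2 = nn.Linear(n, n, bias=False)   # learned analogue of I - (1/L) A^T A
        self.theta = nn.Parameter(torch.full((layers,), 0.1))  # soft thresholds
        self.n_layers = layers

    def forward(self, y):
        # y: (batch, m) measurements -> (batch, n) sparse estimates
        x = torch.zeros(y.shape[0], self.W2.in_features, device=y.device)
        for t in range(self.n_layers):
            pre = self.W1(y) + self.W2(x)
            x = torch.sign(pre) * torch.relu(pre.abs() - self.theta[t])  # soft-threshold
        return x
```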
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Sparsely constrained neural networks for model discovery of PDEs [0.0]
We present a modular framework that determines the sparsity pattern of a deep-learning based surrogate using any sparse regression technique.
We show how a different network architecture and sparsity estimator improve model discovery accuracy and convergence on several benchmark examples.
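The sparse-regression step such frameworks are built around can be sketched as follows (a SINDy-style illustration using an off-the-shelf Lasso; the library construction and all names are assumptions, and the framework above lets any sparsity estimator take Lasso's place):

```python
# Hedged sketch: select active PDE terms by sparse regression of u_t onto a
# library of candidate terms.
import numpy as np
from sklearn.linear_model import Lasso

def discover_pde(Theta, u_t, alpha=1e-3):
    """Theta: (N, p) candidate terms (u, u_x, u_xx, u*u_x, ...) at N samples;
    u_t: (N,) time derivatives. Nonzero coefficients select the PDE terms."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(Theta, u_t)
    return model.coef_
```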
arXiv Detail & Related papers (2020-11-09T11:02:40Z)
- Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
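A hedged sketch of a Monte Carlo GLM predictive under these assumptions (average the model linearized at the MAP estimate over samples from an assumed Laplace posterior; not the authors' code):

```python
# Hedged sketch: GLM predictive via Jacobian-vector products. Requires
# PyTorch >= 2.0; param_samples are assumed drawn from a Laplace posterior.
import torch
from torch.func import functional_call, jvp

def glm_predictive(model, params_map, x, param_samples):
    f = lambda p: functional_call(model, p, (x,))
    f_map = f(params_map)                      # prediction at the MAP parameters
    preds = []
    for p_s in param_samples:                  # each p_s: dict shaped like params_map
        delta = {k: p_s[k] - params_map[k] for k in params_map}
        _, tangent = jvp(f, (params_map,), (delta,))  # J(x) (theta_s - theta_MAP)
        preds.append(f_map + tangent)
    return torch.stack(preds).mean(dim=0)
```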
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
- When and How Can Deep Generative Models be Inverted? [28.83334026125828]
Deep generative models (GANs and VAEs) have been developed quite extensively in recent years.
We define conditions that are applicable to any inversion algorithm (gradient descent, deep encoder, etc.) under which such generative models are invertible.
We show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
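The gradient descent baseline referenced here is commonly implemented as squared-$\ell_2$ minimization over the latent code; a minimal sketch under that assumption (names illustrative):

```python
# Hedged sketch: invert a generator by gradient descent on the latent code.
import torch

def invert(G, x_target, latent_dim, steps=1000, lr=1e-2):
    """Find z such that G(z) approximates x_target."""
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x_target) ** 2).sum()   # squared-l2 reconstruction error
        loss.backward()
        opt.step()
    return z.detach()
```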
arXiv Detail & Related papers (2020-06-28T09:37:52Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
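A minimal simulation of the noiseless 1-bit model with the simple correlation estimator (illustrative; the unit-norm signal here stands in for a generator output G(z), since only the direction is identifiable from sign measurements):

```python
# Hedged sketch: 1-bit compressive sensing, y = sign(A x), with the standard
# normalized correlation estimate of the signal direction.
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 2000
x = rng.standard_normal(n); x /= np.linalg.norm(x)   # unit-norm ground truth
A = rng.standard_normal((m, n))                      # i.i.d. Gaussian measurements
y = np.sign(A @ x)                                   # noiseless 1-bit measurements

x_hat = A.T @ y / m                                  # correlation estimator
x_hat /= np.linalg.norm(x_hat)                       # recover the direction only
print("cosine similarity:", float(x_hat @ x))        # approaches 1 as m grows
```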
arXiv Detail & Related papers (2020-02-05T09:44:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.