Vacant Holes for Unsupervised Detection of the Outliers in Compact
Latent Representation
- URL: http://arxiv.org/abs/2306.09646v1
- Date: Fri, 16 Jun 2023 06:21:48 GMT
- Title: Vacant Holes for Unsupervised Detection of the Outliers in Compact
Latent Representation
- Authors: Misha Glazunov, Apostolis Zarras
- Abstract summary: Detecting outliers is pivotal for any machine learning model deployed and operated in the real world.
In this work, we concentrate on a specific type of such models: Variational Autoencoders (VAEs).
- Score: 0.6091702876917279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting outliers is pivotal for any machine learning model deployed
and operated in the real world. It is especially essential for deep neural
networks, which have been shown to be overconfident on such inputs. Moreover,
even deep generative models that allow estimating the probability density of
the input fail at this task. In this work, we concentrate on a specific type of
such models: Variational Autoencoders (VAEs). First, we unveil a significant
theoretical flaw in an assumption of the classical VAE model. Second, we
enforce an accommodating topological property on the image of the deep neural
mapping into the latent space: compactness. This alleviates the flaw and
provides the means to provably bound the image within determined limits by
squeezing both inliers and outliers together. We enforce compactness using two
approaches: (i) the Alexandroff extension and (ii) a fixed Lipschitz continuity
constant on the encoder mapping of the VAE. Finally, and most importantly, we
discover that anomalous inputs predominantly tend to land in the vacant latent
holes within the compact space, enabling their successful identification. For
that reason, we introduce a specifically devised score for hole detection and
evaluate the solution against several baseline benchmarks, achieving promising
results.
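The two ingredients described in the abstract can be illustrated with a minimal sketch. The paper's actual architecture and hole-detection score are not specified here, so the snippet below is only an assumption-laden toy: it bounds the Lipschitz constant of a linear "encoder" by rescaling its spectral norm, and uses distance to the nearest training latent as a stand-in for a vacant-hole score. All names (`spectral_normalize`, `hole_score`) are illustrative, not from the paper.

```python
import numpy as np

def spectral_normalize(W, target_lipschitz=1.0):
    """Rescale W so its spectral norm (largest singular value) is at most
    target_lipschitz; this bounds the Lipschitz constant of x -> W @ x."""
    sigma = np.linalg.norm(W, 2)  # matrix 2-norm = largest singular value
    if sigma > target_lipschitz:
        W = W * (target_lipschitz / sigma)
    return W

def hole_score(z, train_latents):
    """Toy 'vacant hole' score: distance from latent z to its nearest
    training latent. Large values suggest z sits in an unoccupied region."""
    return np.linalg.norm(train_latents - z, axis=1).min()

rng = np.random.default_rng(0)
# Linear "encoder" from an 8-dim input to a 2-dim latent, Lipschitz-bounded.
W = spectral_normalize(rng.normal(size=(2, 8)))

train_latents = rng.normal(size=(500, 2))   # latents of in-distribution data
inlier_z = W @ rng.normal(size=8)           # lands inside the occupied region
outlier_z = np.array([10.0, 10.0])          # lands in a vacant region
```

Because the encoder's Lipschitz constant is bounded, the image of any bounded input set stays within a bounded latent region, so a distance-based score can meaningfully separate occupied from vacant areas.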
Related papers
- Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), by fully utilizing the potential of the pre-trained stable diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
arXiv Detail & Related papers (2024-10-04T01:52:23Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Do Bayesian Variational Autoencoders Know What They Don't Know? [0.6091702876917279]
The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks.
It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable.
This paper investigates three approaches to inference: Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian.
arXiv Detail & Related papers (2022-12-29T11:48:01Z)
- GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors.
arXiv Detail & Related papers (2022-07-06T06:26:17Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce DAAIN, a novel technique to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
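The core idea in the DAAIN summary above, learning a density estimator over a network's activation distribution and flagging low-density inputs, can be sketched in a few lines. The paper uses normalizing flows; the toy below substitutes a simple Gaussian kernel density estimate over recorded activations purely for illustration, and the function names are hypothetical.

```python
import numpy as np

def fit_activation_density(acts, bandwidth=0.5):
    """Fit a Gaussian kernel density estimate over recorded activations
    (shape: n_samples x n_features) and return a log-density function."""
    def log_density(x):
        sq = np.sum((acts - x) ** 2, axis=1)
        kernels = np.exp(-sq / (2 * bandwidth ** 2))
        return np.log(kernels.mean() + 1e-12)  # epsilon avoids log(0)
    return log_density

rng = np.random.default_rng(1)
acts = rng.normal(size=(1000, 4))     # activations on in-distribution data
log_p = fit_activation_density(acts)

in_score = log_p(np.zeros(4))         # typical activation: high density
ood_score = log_p(np.full(4, 8.0))    # anomalous activation: low density
```

Monitoring activations rather than raw inputs lets one detector cover both OOD samples and adversarial perturbations, since either tends to push internal representations into low-density regions.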
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.