Novelty Detection via Robust Variational Autoencoding
- URL: http://arxiv.org/abs/2006.05534v3
- Date: Wed, 7 Oct 2020 00:56:34 GMT
- Title: Novelty Detection via Robust Variational Autoencoding
- Authors: Chieh-Hsin Lai, Dongmian Zou and Gilad Lerman
- Abstract summary: We propose a new method for novelty detection that can tolerate high corruption of the training points.
Our method trains a robust variational autoencoder (VAE), which aims to generate a model for the uncorrupted training points.
- Score: 13.664682865991255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method for novelty detection that can tolerate high
corruption of the training points, whereas previous works assumed either no or
very low corruption. Our method trains a robust variational autoencoder (VAE),
which aims to generate a model for the uncorrupted training points. To gain
robustness to high corruption, we incorporate the following four changes to the
common VAE: 1. Extracting crucial features of the latent code by a carefully
designed dimension reduction component for distributions; 2. Modeling the
latent distribution as a mixture of Gaussian low-rank inliers and full-rank
outliers, where the testing only uses the inlier model; 3. Applying the
Wasserstein-1 metric for regularization, instead of the Kullback-Leibler (KL)
divergence; and 4. Using a least absolute deviation error for reconstruction.
We establish both robustness to outliers and suitability to low-rank modeling
of the Wasserstein metric as opposed to the KL divergence. We illustrate
state-of-the-art results on standard benchmarks for novelty detection.
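The four modifications above can be pictured concretely. The following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of two of them: the least-absolute-deviation (L1) reconstruction error and a Wasserstein-1 regularizer on the latent code, here estimated with a standard sliced empirical W1 against a Gaussian prior. The network sizes, the sliced estimator, and the weight lam are assumptions made for illustration; the dimension-reduction component and the inlier/outlier mixture model are omitted.

```python
# Minimal sketch only: an L1 reconstruction error plus a sliced empirical
# Wasserstein-1 regularizer toward a standard Gaussian prior.
# Architecture, latent size, and the weight `lam` are illustrative assumptions.
import torch
import torch.nn as nn


class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))  # outputs mean and log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), z


def sliced_w1(z, z_prior, n_proj=50):
    """Empirical sliced Wasserstein-1 distance between two equal-size samples."""
    theta = torch.randn(n_proj, z.size(1), device=z.device)
    theta = theta / theta.norm(dim=1, keepdim=True)        # random unit directions
    proj_z = torch.sort(z @ theta.t(), dim=0).values       # sorted 1-D projections
    proj_p = torch.sort(z_prior @ theta.t(), dim=0).values
    return (proj_z - proj_p).abs().mean()                  # 1-D W1 = mean gap of order statistics


def robust_vae_loss(model, x, lam=1.0):
    x_rec, z = model(x)
    recon = (x - x_rec).abs().mean()                       # least absolute deviation (L1) error
    reg = sliced_w1(z, torch.randn_like(z))                # W1 regularizer in place of the KL term
    return recon + lam * reg


# usage
model = TinyVAE()
x = torch.rand(64, 784)                                    # dummy batch of flattened images
loss = robust_vae_loss(model, x)
loss.backward()
```

The sliced estimator works because, in one dimension, the Wasserstein-1 distance between two equal-size samples reduces to the mean absolute difference of their sorted values, so random 1-D projections give a cheap surrogate for the latent-space W1 term.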
Related papers
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z) - Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that simultaneously improves both OOD accuracy and confidence calibration in vision-language models.
We show that both OOD classification and OOD calibration errors have a shared upper bound consisting of two terms of ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z) - Training Normalizing Flows with the Precision-Recall Divergence [73.92251251511199]
We show that achieving a specified precision-recall trade-off corresponds to minimising f-divergences from a family we call the PR-divergences.
We propose a novel generative model that is able to train a normalizing flow to minimise any f-divergence, and in particular, achieve a given precision-recall trade-off.
arXiv Detail & Related papers (2023-02-01T17:46:47Z) - Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
By definition, the proposed SCORE (self-consistent robust error) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z) - The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting loss, called KFIoU, is easier to implement and works better than the exact SkewIoU (a rough sketch of this Gaussian modeling appears after this list).
arXiv Detail & Related papers (2022-01-29T10:54:57Z) - Toward Minimal Misalignment at Minimal Cost in One-Stage and Anchor-Free
Object Detection [6.486325109549893]
The classification and regression branches have different sensitivities to features from the same scale level and the same spatial location.
The point-based prediction method, which assumes that a point with high classification confidence also has high regression quality, leads to the misalignment problem.
We aim to resolve the phenomenon at minimal cost: a minor adjustment of the head network and a new label assignment method replacing the rigid one.
arXiv Detail & Related papers (2021-12-16T14:22:13Z) - Tomographic Auto-Encoder: Unsupervised Bayesian Recovery of Corrupted
Data [4.725669222165439]
We propose a new probabilistic method for unsupervised recovery of corrupted data.
Given a large ensemble of degraded samples, our method recovers accurate posteriors of clean values.
We test our model in a data recovery task under the common setting of missing values and noise.
arXiv Detail & Related papers (2020-06-30T16:18:16Z) - Interpreting Rate-Distortion of Variational Autoencoder and Using Model
Uncertainty for Anomaly Detection [5.491655566898372]
We build a scalable machine learning system for unsupervised anomaly detection via representation learning.
We revisit VAE from the perspective of information theory to provide some theoretical foundations on using the reconstruction error.
We show empirically the competitive performance of our approach on benchmark datasets.
arXiv Detail & Related papers (2020-05-05T00:03:48Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of the Adversarial Autoencoder that uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
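As mentioned in the KFIoU entry above, the Gaussian modeling of rotated boxes can be sketched briefly. The following NumPy snippet is a rough illustration, not the paper's KFIoU loss: a rotated box is represented as a 2-D Gaussian (mean at the center, covariance built from the rotation matrix and the half-widths), and the Kalman-filter product of two such Gaussians serves as a smooth surrogate for their intersection. The covariance convention, the area formula, and the final IoU-like ratio are assumptions made for illustration; the actual KFIoU loss adds a center-distance term and its own normalization.

```python
# Rough illustration of Gaussian box modeling with a Kalman-filter product;
# not the exact KFIoU loss from the paper.
import numpy as np


def box_to_gaussian(cx, cy, w, h, angle):
    """Represent a rotated box (center, width, height, angle in radians)
    as a 2-D Gaussian with covariance R diag(w^2/4, h^2/4) R^T."""
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    mu = np.array([cx, cy])
    sigma = R @ np.diag([w ** 2 / 4.0, h ** 2 / 4.0]) @ R.T
    return mu, sigma


def kalman_product(mu1, s1, mu2, s2):
    """Mean and covariance of the product of two Gaussians (the Kalman update),
    used here as a smooth surrogate for the boxes' intersection."""
    K = s1 @ np.linalg.inv(s1 + s2)
    return mu1 + K @ (mu2 - mu1), s1 - K @ s1


def gaussian_area(sigma):
    # Under the w^2/4, h^2/4 convention, the box area w*h equals 4*sqrt(det Sigma).
    return 4.0 * np.sqrt(np.linalg.det(sigma))


# usage: an IoU-like overlap score between two rotated boxes (illustrative only)
mu1, s1 = box_to_gaussian(0.0, 0.0, 4.0, 2.0, 0.0)
mu2, s2 = box_to_gaussian(0.5, 0.0, 4.0, 2.0, np.pi / 6)
_, s_int = kalman_product(mu1, s1, mu2, s2)
iou_like = gaussian_area(s_int) / (gaussian_area(s1) + gaussian_area(s2) - gaussian_area(s_int))
print(round(iou_like, 3))
```

Because every step is differentiable, such a Gaussian overlap can serve as a training loss without computing the exact polygon intersection that SkewIoU requires.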