DenseHybrid: Hybrid Anomaly Detection for Dense Open-set Recognition
- URL: http://arxiv.org/abs/2207.02606v1
- Date: Wed, 6 Jul 2022 11:48:50 GMT
- Title: DenseHybrid: Hybrid Anomaly Detection for Dense Open-set Recognition
- Authors: Matej Grcić, Petra Bevandić, Siniša Šegvić
- Abstract summary: Anomaly detection can be conceived either through generative modelling of regular training data or by discriminating with respect to negative training data.
This paper presents a novel hybrid anomaly score which allows dense open-set recognition on large natural images.
Experiments evaluate our contributions on standard dense anomaly detection benchmarks as well as in terms of open-mIoU - a novel metric for dense open-set performance.
- Score: 1.278093617645299
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Anomaly detection can be conceived either through generative modelling of
regular training data or by discriminating with respect to negative training
data. These two approaches exhibit different failure modes. Consequently,
hybrid algorithms present an attractive research goal. Unfortunately, dense
anomaly detection requires translational equivariance and very large input
resolutions. These requirements disqualify all previous hybrid approaches to
the best of our knowledge. We therefore design a novel hybrid algorithm based
on reinterpreting discriminative logits as a logarithm of the unnormalized
joint distribution $\hat{p}(\mathbf{x}, \mathbf{y})$. Our model builds on a
shared convolutional representation from which we recover three dense
predictions: i) the closed-set class posterior $P(\mathbf{y}|\mathbf{x})$, ii)
the dataset posterior $P(d_{in}|\mathbf{x})$, iii) unnormalized data likelihood
$\hat{p}(\mathbf{x})$. The latter two predictions are trained both on the
standard training data and on a generic negative dataset. We blend these two
predictions into a hybrid anomaly score which allows dense open-set recognition
on large natural images. We carefully design a custom loss for the data
likelihood in order to avoid backpropagation through the intractable
normalizing constant $Z(\theta)$. Experiments evaluate our contributions on
standard dense anomaly detection benchmarks as well as in terms of open-mIoU -
a novel metric for dense open-set performance. Our submissions achieve
state-of-the-art performance with negligible computational overhead over
the standard semantic segmentation baseline.
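The abstract's central idea, reading class logits as the log of an unnormalized joint $\hat{p}(\mathbf{x}, \mathbf{y})$, can be sketched for a single pixel as follows. The function name, the sigmoid dataset head, and the exact blending of the two cues are illustrative assumptions, not the paper's verified formulation:

```python
import math

def hybrid_anomaly_score(logits, dout_logit):
    """Per-pixel sketch: `logits` are the class logits at one pixel,
    `dout_logit` is the logit of the dataset posterior P(d_out|x)."""
    # Reading logits as log p-hat(x, y), the unnormalized data likelihood
    # is log p-hat(x) = logsumexp over classes.
    m = max(logits)
    log_px = m + math.log(sum(math.exp(l - m) for l in logits))
    # Closed-set posterior P(y|x) is then the ordinary softmax.
    posterior = [math.exp(l - log_px) for l in logits]
    # Dataset posterior P(d_out|x) from a separate head (sigmoid of its logit).
    log_p_dout = -math.log1p(math.exp(-dout_logit))
    # Hybrid score: large where the negative-data head fires while the
    # unnormalized likelihood is small.
    return log_p_dout - log_px, posterior
```

Both cues come from the same shared representation, which is consistent with the claimed negligible overhead over a plain segmentation forward pass.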
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
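To make the abstention mechanism concrete, here is a minimal Chow-style rejection rule. This illustrates only what "classification with rejection" means; the paper itself derives the rejector from learned density ratios rather than a fixed posterior threshold:

```python
def predict_with_rejection(probs, threshold=0.7):
    """Predict the argmax class, or abstain (return None) when the top
    posterior probability falls below `threshold`."""
    conf = max(probs)
    label = probs.index(conf)
    return label if conf >= threshold else None  # None marks abstention
```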
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Adversarial Anomaly Detection using Gaussian Priors and Nonlinear Anomaly Scores [0.21847754147782888]
Anomaly detection in imbalanced datasets is a frequent and crucial problem, especially in the medical domain.
By combining the generative stability of a $\beta$-variational autoencoder (VAE) with the discriminative strengths of generative adversarial networks (GANs), we propose a novel model, $\beta$-VAEGAN.
We investigate methods for composing anomaly scores based on the discriminative and reconstructive capabilities of our model.
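One hypothetical way to compose the two cues named in this summary, a reconstructive cue from the VAE and a discriminative cue from the GAN discriminator, is a simple convex combination. The weighting `alpha` and the exact terms are illustrative assumptions, not the paper's composition:

```python
import math

def composed_anomaly_score(x, x_recon, disc_logit, alpha=0.5):
    """Blend a reconstruction-based and a discriminator-based anomaly cue."""
    # Reconstruction error: samples the VAE fails to rebuild look anomalous.
    recon_error = sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)
    # Discriminator's probability that the input resembles training data.
    p_real = 1.0 / (1.0 + math.exp(-disc_logit))
    return alpha * recon_error + (1.0 - alpha) * (1.0 - p_real)
```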
arXiv Detail & Related papers (2023-10-27T12:24:08Z)
- Conformalization of Sparse Generalized Linear Models [2.1485350418225244]
The conformal prediction method estimates a confidence set for $y_{n+1}$ that is valid for any finite sample size.
Although attractive, computing such a set is computationally infeasible in most regression problems.
We show how our path-following algorithm accurately approximates conformal prediction sets.
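For context, the standard split-conformal construction of such a finite-sample-valid interval is short enough to show directly. This is the common baseline, not the paper's path-following algorithm for sparse GLMs:

```python
import math

def split_conformal_interval(cal_residuals, y_pred_new, alpha=0.1):
    """Interval for the next response y_{n+1} with >= 1 - alpha coverage,
    built from residuals on a held-out calibration set of size n."""
    n = len(cal_residuals)
    # Rank of the conformal quantile among the absolute residuals.
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    q = sorted(abs(r) for r in cal_residuals)[k - 1]
    return y_pred_new - q, y_pred_new + q
```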
arXiv Detail & Related papers (2023-07-11T08:36:12Z)
- Hybrid Open-set Segmentation with Synthetic Negative Data [0.0]
Open-set segmentation can be conceived by complementing closed-set classification with anomaly detection.
We propose a novel anomaly score that fuses generative and discriminative cues.
Experiments reveal strong open-set performance in spite of negligible computational overhead.
arXiv Detail & Related papers (2023-01-19T11:02:44Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm is investigated for mixtures of elliptical distributions with the property of handling potential missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
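As a reminder of the base algorithm being extended, one EM iteration for a plain two-component 1-D Gaussian mixture looks as follows. The paper's version additionally covers elliptical components and missing entries, which this sketch does not attempt:

```python
import math

def em_step(x, pi, mu, var):
    """One EM iteration for a two-component 1-D Gaussian mixture."""
    def pdf(v, m, s2):
        return math.exp(-(v - m) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    # E-step: responsibility of each component for each point.
    resp = []
    for v in x:
        w = [pi[k] * pdf(v, mu[k], var[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate weights, means and variances.
    nk = [sum(r[k] for r in resp) for k in range(2)]
    pi = [nk[k] / len(x) for k in range(2)]
    mu = [sum(r[k] * v for r, v in zip(resp, x)) / nk[k] for k in range(2)]
    var = [sum(r[k] * (v - mu[k]) ** 2 for r, v in zip(resp, x)) / nk[k]
           for k in range(2)]
    return pi, mu, var
```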
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization [39.35822033674126]
We study binary linear classification under a generative Gaussian mixture model.
We derive novel non-asymptotic bounds on the classification error of the latter.
Our results extend to a noisy model with constant probability noise flips.
arXiv Detail & Related papers (2020-11-18T07:59:55Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
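A minimal sketch of the inner maximization in such an adversarially robust training loop is the FGSM-style worst-case shift within an L-infinity ball. This is a deliberate simplification for illustration, not the paper's exact distributionally robust formulation:

```python
def worst_case_input(x, grad, eps=0.1):
    """Shift each input coordinate to its worst case within an
    L-infinity ball of radius eps, in the direction of the loss gradient."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

Training on `worst_case_input(x, grad)` instead of the clean `x` is what endows the model with robustness to adversarially manipulated inputs.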
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
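The mechanism itself is simple: normalize with the statistics of the test batch rather than the running averages stored during training. A single-feature sketch, assuming a 1-D batch for brevity:

```python
import math

def prediction_time_batchnorm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one feature using the *test* batch's own mean and
    variance instead of training-time running statistics."""
    mu = sum(batch) / len(batch)
    var = sum((v - mu) ** 2 for v in batch) / len(batch)
    return [gamma * (v - mu) / math.sqrt(var + eps) + beta for v in batch]
```

Under covariate shift the test batch's statistics can differ sharply from the training-time running averages, which is why recomputing them helps.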
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- Bayesian Semi-supervised Multi-category Classification under Nonparanormality [2.307581190124002]
Semi-supervised learning is a model training method that uses both labeled and unlabeled data.
This paper proposes a fully Bayes semi-supervised learning algorithm that can be applied to any multi-category classification problem.
arXiv Detail & Related papers (2020-01-11T21:31:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.