Autoencoder Attractors for Uncertainty Estimation
- URL: http://arxiv.org/abs/2204.00382v1
- Date: Fri, 1 Apr 2022 12:10:06 GMT
- Title: Autoencoder Attractors for Uncertainty Estimation
- Authors: Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker
- Abstract summary: We propose a novel approach for uncertainty estimation based on autoencoder models.
We evaluate our approach on several dataset combinations as well as on an industrial application for occupant classification in the vehicle interior.
- Score: 13.618797548020462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reliability assessment of a machine learning model's prediction is an
important quantity for deployment in safety-critical applications. Not only
can it be used to detect novel scenarios, either as out-of-distribution or
anomalous samples, but it also helps to identify deficiencies in the training
data distribution. Many promising research directions have either proposed
traditional methods like Gaussian processes or extended deep-learning-based
approaches, for example, by interpreting them from a Bayesian point of view. In
this work we propose a novel approach for uncertainty estimation based on
autoencoder models: The recursive application of a previously trained
autoencoder model can be interpreted as a dynamical system storing training
examples as attractors. While input images close to known samples will converge
to the same or a similar attractor, input samples containing unknown features are
unstable and converge to different training samples by potentially removing or
changing characteristic features. The use of dropout during training and
inference leads to a family of similar dynamical systems, each one being robust
on samples close to the training distribution but unstable on new features.
Either the model reliably removes these features or the resulting instability
can be exploited to detect problematic input samples. We evaluate our approach
on several dataset combinations as well as on an industrial application for
occupant classification in the vehicle interior for which we additionally
release a new synthetic dataset.
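As a concrete illustration of the mechanism described in the abstract, below is a minimal sketch of recursively applying a dropout autoencoder and scoring the disagreement between trajectory endpoints. The architecture, iteration counts, and variance-based score are illustrative assumptions; only the recursive application of a trained autoencoder with dropout kept active at inference is taken from the abstract.

```python
import torch
import torch.nn as nn

class DropoutAutoencoder(nn.Module):
    """Small fully connected autoencoder with dropout, assumed here for
    illustration; the paper's actual architecture may differ."""
    def __init__(self, dim=784, hidden=64, p=0.2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(), nn.Dropout(p),
            nn.Linear(256, hidden), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(p),
            nn.Linear(256, dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def attractor_uncertainty(model, x, n_trajectories=10, n_iters=20):
    """Recursively feed the reconstruction back into the autoencoder with
    dropout active, then score how much the endpoints disagree."""
    model.train()  # keep dropout sampling active at inference (MC dropout)
    endpoints = []
    for _ in range(n_trajectories):
        z = x.clone()
        for _ in range(n_iters):  # iterate the map x -> AE(x) toward an attractor
            z = model(z)
        endpoints.append(z)
    endpoints = torch.stack(endpoints)  # (n_trajectories, batch, dim)
    # Inputs close to the training distribution converge to the same or a
    # similar attractor on every trajectory (low variance across trajectories);
    # unknown features destabilize the iteration, so the endpoints disagree.
    return endpoints.var(dim=0).mean(dim=-1)  # one score per batch sample
```

Higher scores flag inputs whose trajectories are unstable under the family of dropout-perturbed dynamical systems. Note that PyTorch resamples the dropout mask at every forward call, so each trajectory here is a stochastic approximation of one member of that family rather than a fixed-mask system.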
Related papers
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can stem from biases in data acquisition rather than from the underlying task.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Monitoring Model Deterioration with Explainable Uncertainty Estimation via Non-parametric Bootstrap [0.0]
Monitoring machine learning models once they are deployed is challenging.
It is even more challenging to decide when to retrain models in real-world scenarios where labeled data is out of reach.
In this work, we use non-parametric bootstrapped uncertainty estimates and SHAP values to provide explainable uncertainty estimation.
arXiv Detail & Related papers (2022-01-27T17:23:04Z)
- Dense Out-of-Distribution Detection by Robust Learning on Synthetic Negative Data [1.7474352892977458]
We show how to detect out-of-distribution anomalies in road-driving scenes and remote sensing imagery.
We leverage a jointly trained normalizing flow, owing to its coverage-oriented learning objective and its capability to generate samples at different resolutions.
The resulting models set the new state of the art on benchmarks for out-of-distribution detection in road-driving scenes and remote sensing imagery.
arXiv Detail & Related papers (2021-12-23T20:35:10Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of a VAE failing to consistently encode its own generated samples for the learned representations, as well as the consequences of fixing this by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
- Uncertainty-Aware Deep Classifiers using Generative Models [7.486679152591502]
Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions.
Some recent approaches quantify uncertainty directly by training the model to output high uncertainty for samples close to class boundaries or outside the training distribution.
We develop a novel neural network model that expresses both aleatoric and epistemic uncertainty to distinguish decision-boundary from out-of-distribution regions; a generic version of this decomposition is sketched after the list.
arXiv Detail & Related papers (2020-06-07T15:38:35Z)
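For the last entry above, a generic way to make the aleatoric/epistemic distinction concrete is the standard entropy decomposition over a set of stochastic softmax predictions (e.g. from MC dropout). This is a hedged illustration of the concept only, not the cited paper's generative-model-based construction.

```python
import numpy as np

def uncertainty_decomposition(probs, eps=1e-12):
    """probs: array of shape (n_samples, n_classes) holding softmax outputs
    for one input, one row per stochastic forward pass (e.g. MC dropout)."""
    mean_p = probs.mean(axis=0)
    # Total predictive uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric part: average entropy of the individual predictions
    # (data noise that persists even when the weights are well determined).
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Epistemic part: the remainder (mutual information between prediction
    # and model), which is high in out-of-distribution regions.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```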
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.