Classification and Uncertainty Quantification of Corrupted Data using
Semi-Supervised Autoencoders
- URL: http://arxiv.org/abs/2105.13393v2
- Date: Thu, 20 Apr 2023 20:03:19 GMT
- Authors: Philipp Joppich, Sebastian Dorn, Oliver De Candido, Wolfgang Utschick,
Jakob Knollmüller
- Abstract summary: We present a probabilistic approach to classify strongly corrupted data and quantify uncertainty.
A semi-supervised autoencoder trained on uncorrupted data is the underlying architecture.
We show that the model uncertainty strongly depends on whether the classification is correct or wrong.
- Score: 11.300365160909879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parametric and non-parametric classifiers often have to deal with real-world
data, where corruptions like noise, occlusions, and blur are unavoidable -
posing significant challenges. We present a probabilistic approach to classify
strongly corrupted data and quantify uncertainty, despite the model only having
been trained with uncorrupted data. A semi-supervised autoencoder trained on
uncorrupted data is the underlying architecture. We use the decoding part as a
generative model for realistic data and extend it by convolutions, masking, and
additive Gaussian noise to describe imperfections. This constitutes a
statistical inference task in terms of the optimal latent space activations of
the underlying uncorrupted datum. We solve this problem approximately with
Metric Gaussian Variational Inference (MGVI). The supervision of the
autoencoder's latent space allows us to classify corrupted data directly under
uncertainty with the statistically inferred latent space activations.
Furthermore, we demonstrate that the model uncertainty strongly depends on
whether the classification is correct or wrong, setting a basis for a
statistical "lie detector" of the classification. Independent of that, we show
that the generative model can optimally restore the uncorrupted datum by
decoding the inferred latent space activations.
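The abstract describes a concrete forward model: a corrupted observation is the decoded latent datum passed through a convolution (blur), a mask (occlusion), and additive Gaussian noise, and the latent activations are then inferred statistically. The numpy sketch below is illustrative only: a hypothetical linear "decoder" stands in for the trained network, and a closed-form MAP estimate stands in for the MGVI posterior approximation used in the paper; all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 32, 4                      # data dim, latent dim (illustrative)

# Hypothetical linear decoder standing in for the autoencoder's
# trained (deep) decoding network.
W = rng.normal(size=(d, k))

# Corruption model from the abstract: convolution (blur), masking,
# and additive Gaussian noise.
C = np.zeros((d, d))              # 3-tap circular moving-average blur
for i in range(d):
    for j in (-1, 0, 1):
        C[i, (i + j) % d] = 1.0 / 3.0
mask = (rng.random(d) > 0.4).astype(float)   # ~40% of entries occluded
M = np.diag(mask)
sigma = 0.05                      # noise standard deviation

A = M @ C @ W                     # full linear forward operator

# Simulate a corrupted observation of the uncorrupted datum W @ z_true.
z_true = rng.normal(size=k)
y = A @ z_true + sigma * rng.normal(size=d)

# MAP estimate of the latent activations under a standard-normal prior
# (a point-estimate stand-in for MGVI):
#   z* = argmin ||y - A z||^2 / (2 sigma^2) + ||z||^2 / 2
z_map = np.linalg.solve(A.T @ A / sigma**2 + np.eye(k),
                        A.T @ y / sigma**2)

# Decoding the inferred latents restores the uncorrupted datum.
restored = W @ z_map
err = np.linalg.norm(restored - W @ z_true) / np.linalg.norm(W @ z_true)
print(f"relative restoration error: {err:.3f}")
```

Even with a blurred, partially masked, noisy observation, the low-dimensional latent code is well constrained, so the decoded restoration is close to the clean datum; the paper performs the analogous inference with a deep decoder and a full variational posterior.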
Related papers
- Classification under Nuisance Parameters and Generalized Label Shift in Likelihood-Free Inference [3.507509142413452]
We propose a new method for robust uncertainty quantification that casts classification as a hypothesis testing problem under nuisance parameters.
Our method effectively endows a pre-trained classifier with domain adaptation capabilities and returns valid prediction sets while maintaining high power.
We demonstrate its performance on two challenging scientific problems in biology and astroparticle physics with data from realistic mechanistic models.
arXiv Detail & Related papers (2024-02-08T00:12:18Z)
- ALUM: Adversarial Data Uncertainty Modeling from Latent Model Uncertainty Compensation [25.67258563807856]
We propose a novel method called ALUM to handle the model uncertainty and data uncertainty in a unified scheme.
Our proposed ALUM is model-agnostic which can be easily implemented into any existing deep model with little extra overhead.
arXiv Detail & Related papers (2023-03-29T17:24:12Z)
- Posterior Collapse and Latent Variable Non-identifiability [54.842098835445]
We propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility.
Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
arXiv Detail & Related papers (2023-01-02T06:16:56Z)
- Classification at the Accuracy Limit -- Facing the Problem of Data Ambiguity [0.0]
We show the theoretical limit for classification accuracy that arises from the overlap of data categories.
We compare emerging data embeddings produced by supervised and unsupervised training, using MNIST and human EEG recordings during sleep.
This suggests that human-defined categories, such as hand-written digits or sleep stages, can indeed be considered 'natural kinds'.
arXiv Detail & Related papers (2022-06-04T07:00:32Z)
- Theoretical characterization of uncertainty in high-dimensional linear classification [24.073221004661427]
We show that uncertainty for learning from a limited number of samples of high-dimensional input data and labels can be obtained by the approximate message passing algorithm.
We discuss how over-confidence can be mitigated by appropriately regularising, and show that cross-validating with respect to the loss leads to better calibration than with the 0/1 error.
arXiv Detail & Related papers (2022-02-07T15:32:07Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
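The ATC idea summarized above is simple enough to sketch: fit a threshold on source confidences so that the fraction of source examples above it matches the known source accuracy, then report the fraction of unlabeled target examples above that threshold as the predicted target accuracy. The confidence scores below are synthetic placeholders, not real model outputs.

```python
import numpy as np

def learn_threshold(source_conf, source_correct):
    """ATC fitting step: choose t so that the fraction of source
    points with confidence above t matches the source accuracy."""
    acc = source_correct.mean()
    return np.quantile(source_conf, 1.0 - acc)

def predict_target_accuracy(target_conf, t):
    """ATC estimate: fraction of unlabeled target examples whose
    confidence exceeds the learned threshold."""
    return (target_conf > t).mean()

# Toy illustration with synthetic confidence scores (assumptions,
# standing in for e.g. max-softmax outputs of a trained classifier).
rng = np.random.default_rng(1)
source_conf = rng.beta(5, 2, size=2000)
source_correct = rng.random(2000) < source_conf  # correctness tracks confidence
t = learn_threshold(source_conf, source_correct)
target_conf = rng.beta(4, 3, size=2000)          # shifted target distribution
pred_acc = predict_target_accuracy(target_conf, t)
print("predicted target accuracy:", pred_acc)
```

Because the target confidences are shifted lower than the source ones here, the predicted target accuracy comes out below the source accuracy, which is the qualitative behavior one wants under distribution shift.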
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
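The key point in the summary above is that a model with a tractable exact likelihood (as a normalizing flow provides) makes the Bayes error computable: it is the expectation of one minus the maximum class posterior. The sketch below uses two 1-D Gaussians as a stand-in for flow densities; the densities, priors, and dimensions are illustrative assumptions, and the integral is estimated by Monte Carlo rather than computed exactly.

```python
import numpy as np

def gauss_pdf(x, mu, sd):
    """Exact class-conditional density, playing the role of a
    normalizing flow's tractable likelihood."""
    return np.exp(-(x - mu) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
n = 200_000
priors = np.array([0.5, 0.5])
mus = np.array([-1.0, 1.0])
sds = np.array([1.0, 1.0])

# Draw samples from the mixture p(x) = sum_c priors[c] * p(x | c).
labels = rng.choice(2, size=n, p=priors)
x = rng.normal(mus[labels], sds[labels])

# Class posteriors p(c | x) by Bayes' rule, then
# Bayes error = E_x[1 - max_c p(c | x)], estimated by Monte Carlo.
liks = np.stack([gauss_pdf(x, m, s) for m, s in zip(mus, sds)], axis=1)
post = priors * liks
post /= post.sum(axis=1, keepdims=True)
bayes_err = np.mean(1.0 - post.max(axis=1))
print(f"estimated Bayes error: {bayes_err:.4f}")  # analytic value: Phi(-1) ~ 0.1587
```

For this two-Gaussian example the analytic Bayes error is known (the standard-normal CDF at -1, about 0.159), which makes it a convenient sanity check for the estimator.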
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Improving Face Recognition by Clustering Unlabeled Faces in the Wild [77.48677160252198]
We propose a novel identity separation method based on extreme value theory.
It greatly reduces the problems caused by overlapping-identity label noise.
Experiments on both controlled and real settings demonstrate our method's consistent improvements.
arXiv Detail & Related papers (2020-07-14T12:26:50Z)
- Tomographic Auto-Encoder: Unsupervised Bayesian Recovery of Corrupted Data [4.725669222165439]
We propose a new probabilistic method for unsupervised recovery of corrupted data.
Given a large ensemble of degraded samples, our method recovers accurate posteriors of clean values.
We test our model in a data recovery task under the common setting of missing values and noise.
arXiv Detail & Related papers (2020-06-30T16:18:16Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.