Deterministic Gaussian Averaged Neural Networks
- URL: http://arxiv.org/abs/2006.06061v1
- Date: Wed, 10 Jun 2020 20:53:31 GMT
- Title: Deterministic Gaussian Averaged Neural Networks
- Authors: Ryan Campbell, Chris Finlay, Adam M Oberman
- Abstract summary: We present a deterministic method to compute the Gaussian average of neural networks used in regression and classification.
We use this equivalence to certify models which perform well on clean data but are not robust to adversarial perturbations.
- Score: 7.51557557629519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a deterministic method to compute the Gaussian average of neural
networks used in regression and classification. Our method is based on an
equivalence between training with a particular regularized loss, and the
expected values of Gaussian averages. We use this equivalence to certify models
which perform well on clean data but are not robust to adversarial
perturbations. In terms of certified accuracy and adversarial robustness, our
method is comparable to known stochastic methods such as randomized smoothing,
but requires only a single model evaluation during inference.
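The Gaussian average in question is the expectation of the network output over Gaussian perturbations of the input, f_bar(x) = E_{delta ~ N(0, sigma^2 I)}[f(x + delta)]. Stochastic methods such as randomized smoothing estimate this expectation by Monte Carlo sampling, which costs many forward passes per input; the deterministic method above obtains it in a single evaluation. A minimal NumPy sketch of the Monte Carlo baseline follows; the model f and noise level sigma are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gaussian_average(f, x, sigma=0.25, n_samples=1000, rng=None):
    """Monte Carlo estimate of the Gaussian average
    f_bar(x) = E_{d ~ N(0, sigma^2 I)}[f(x + d)].
    Randomized smoothing needs n_samples forward passes per input;
    the paper's deterministic method replaces them with one evaluation.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    return np.mean([f(x + d) for d in noise], axis=0)

# Illustrative stand-in for a trained network (an assumption for the demo).
f = lambda x: np.tanh(x).sum()

x = np.array([0.1, -0.4])
print(gaussian_average(f, x, sigma=0.25, n_samples=5000))
```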
Related papers
- A Mallows-like Criterion for Anomaly Detection with Random Forest Implementation [7.569443648362081]
This paper proposes a novel criterion for selecting the weights used to aggregate multiple models, in which the focal loss function accounts for the classification of extremely imbalanced data.
We have evaluated the proposed method on benchmark datasets across various domains, including network intrusion detection.
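For reference, the focal loss mentioned above is the standard re-weighted cross-entropy of Lin et al. (2017); a minimal sketch of the binary form, assuming the usual gamma and alpha parameters (the exact weighting used in the paper's aggregation criterion is not specified in the summary):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss (Lin et al., 2017): cross-entropy re-weighted by
    (1 - p_t)^gamma so that easy, majority-class examples are down-weighted.

    p: predicted probability of the positive class; y: 0/1 labels.
    """
    p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balance weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy positive (p = 0.95) contributes almost nothing; a hard one (p = 0.10)
# keeps a loss close to weighted cross-entropy.
print(focal_loss(np.array([0.95, 0.10]), np.array([1, 1])))
```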
arXiv Detail & Related papers (2024-05-29T09:36:57Z)
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise drawn from a zero-mean distribution is added to the target values to produce a Gaussian error distribution.
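A minimal sketch of that idea, assuming a standard TD target; the paper shapes the noise to offset the skew of the Bellman error, so the Gaussian distribution and scale used here are stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetric_q_target(q_next, reward, done, gamma=0.99, noise_scale=0.1):
    """Standard TD target plus zero-mean synthetic noise, per the summary.
    Noise distribution and scale here are illustrative, not the paper's.
    """
    td_target = reward + gamma * (1.0 - done) * np.max(q_next, axis=-1)
    return td_target + rng.normal(0.0, noise_scale, size=td_target.shape)

# Toy batch: Q-values over 3 actions for 4 next states.
q_next = rng.standard_normal((4, 3))
reward = np.array([1.0, 0.0, 0.5, -1.0])
done = np.array([0.0, 0.0, 1.0, 0.0])
print(symmetric_q_target(q_next, reward, done))
```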
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Gaussian Latent Representations for Uncertainty Estimation using Mahalanobis Distance in Deep Classifiers [1.5088605208312555]
We present a lightweight, fast, and high-performance regularization method for Mahalanobis distance-based uncertainty prediction.
We show the applicability of our method to a real-life computer vision use case on microorganism classification.
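The generic recipe behind Mahalanobis-distance uncertainty is to fit class-conditional Gaussians with a shared covariance to the latent features and score a test point by its distance to the nearest class mean; a minimal sketch of that recipe follows (the regularization method the paper adds on top is not detailed in the summary):

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Fit per-class means and a shared (tied) covariance to latent features."""
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([feats[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_score(x, means, cov_inv):
    """Uncertainty score: distance to the nearest class-conditional Gaussian."""
    return min(np.sqrt((x - m) @ cov_inv @ (x - m)) for m in means.values())

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
means, cov_inv = fit_class_gaussians(feats, labels)
print(mahalanobis_score(rng.normal(10, 1, 8), means, cov_inv))  # far -> large
```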
arXiv Detail & Related papers (2023-05-23T09:18:47Z)
- On double-descent in uncertainty quantification in overparametrized models [24.073221004661427]
Uncertainty quantification is a central challenge in reliable and trustworthy machine learning.
We show a trade-off between classification accuracy and calibration, unveiling a double-descent-like behavior in the calibration curve of optimally regularized estimators.
This is in contrast with the empirical Bayes method, which we show to be well calibrated in our setting despite the higher generalization error and overparametrization.
arXiv Detail & Related papers (2022-10-23T16:01:08Z)
- Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is a lack of generalization.
We propose novel methods that estimate the uncertainty of individual predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
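One simple instantiation of this idea, offered as an illustration rather than the paper's exact estimator: record the winning-class logits on held-out data, then score a new prediction by the empirical percentile of its winning logit in that reference distribution.

```python
import numpy as np

def fit_logit_reference(val_logits):
    """Collect and sort the winning-class logits observed on validation data."""
    return np.sort(val_logits.max(axis=1))

def confidence_from_logits(logits, reference):
    """Confidence = empirical percentile of the winning logit within the
    validation reference distribution (a stand-in for the paper's estimator)."""
    top = logits.max(axis=-1)
    return np.searchsorted(reference, top) / len(reference)

rng = np.random.default_rng(0)
val_logits = rng.normal(5.0, 1.0, (1000, 10))  # toy validation logits
reference = fit_logit_reference(val_logits)
test_logits = np.array([[9.0] + [0.0] * 9,     # unusually large top logit
                        [5.2] + [5.0] * 9])    # unremarkable top logit
print(confidence_from_logits(test_logits, reference))  # high vs. low confidence
```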
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
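The Nadaraya-Watson estimate referenced here is a kernel-weighted average of training labels; a minimal sketch with a Gaussian kernel (the bandwidth and kernel are illustrative assumptions, and the uncertainty measure the paper builds on this estimate is not shown):

```python
import numpy as np

def nadaraya_watson(x, train_x, train_y_onehot, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average of
    one-hot training labels, here with a Gaussian (RBF) kernel."""
    sq_dists = ((train_x - x) ** 2).sum(axis=1)
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    return weights @ train_y_onehot / weights.sum()

rng = np.random.default_rng(0)
train_x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
train_y = np.array([0] * 50 + [1] * 50)
onehot = np.eye(2)[train_y]

# Near class 0 the estimate is confident; between the classes it is uncertain.
print(nadaraya_watson(np.array([0.0, 0.0]), train_x, onehot))
print(nadaraya_watson(np.array([2.0, 2.0]), train_x, onehot))
```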
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
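The underlying identity is that the Bayes error equals E_x[1 - max_y p(y | x)], which becomes computable once the class-conditional densities are known exactly, as a normalizing flow provides. A minimal Monte Carlo sketch, with isotropic Gaussians standing in for the flow densities (an illustrative assumption):

```python
import numpy as np

def gauss_logpdf(x, mean, var):
    """Log-density of an isotropic Gaussian; stands in for the exact
    log p(x | y) that a normalizing flow would supply (an assumption)."""
    d = x.shape[-1]
    return -0.5 * (((x - mean) ** 2).sum(axis=-1) / var
                   + d * np.log(2.0 * np.pi * var))

def bayes_error(samples, log_densities, priors):
    """Monte Carlo estimate of the Bayes error E_x[1 - max_y p(y | x)];
    exact up to sampling noise when the class densities are exact."""
    log_joint = np.stack([ld(samples) + np.log(p)
                          for ld, p in zip(log_densities, priors)], axis=1)
    post = np.exp(log_joint - log_joint.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    return float(np.mean(1.0 - post.max(axis=1)))

rng = np.random.default_rng(0)
# Two overlapping 2-D classes; samples drawn from the equal-weight mixture.
x = np.vstack([rng.normal(0, 1, (5000, 2)), rng.normal(2, 1, (5000, 2))])
dens = [lambda s: gauss_logpdf(s, 0.0, 1.0),
        lambda s: gauss_logpdf(s, 2.0, 1.0)]
print(bayes_error(x, dens, [0.5, 0.5]))  # ~0.08 for this class separation
```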
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Scalable Cross Validation Losses for Gaussian Process Models [22.204619587725208]
We use Polya-Gamma auxiliary variables and variational inference to accommodate binary and multi-class classification.
We find that our method offers fast training and excellent predictive performance.
arXiv Detail & Related papers (2021-05-24T21:01:47Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.