Towards a Theoretical Understanding of the Robustness of Variational
Autoencoders
- URL: http://arxiv.org/abs/2007.07365v3
- Date: Fri, 29 Jan 2021 18:33:07 GMT
- Title: Towards a Theoretical Understanding of the Robustness of Variational
Autoencoders
- Authors: Alexander Camuto, Matthew Willetts, Stephen Roberts, Chris Holmes, Tom
Rainforth
- Abstract summary: We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
We develop a novel criterion for robustness in probabilistic models: $r$-robustness.
We show that VAEs trained using disentangling methods score well under our robustness metrics.
- Score: 82.68133908421792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We make inroads into understanding the robustness of Variational Autoencoders
(VAEs) to adversarial attacks and other input perturbations. While previous
work has developed algorithmic approaches to attacking and defending VAEs,
there remains a lack of formalization for what it means for a VAE to be robust.
To address this, we develop a novel criterion for robustness in probabilistic
models: $r$-robustness. We then use this to construct the first theoretical
results for the robustness of VAEs, deriving margins in the input space for
which we can provide guarantees about the resulting reconstruction. Informally,
we are able to define a region within which any perturbation will produce a
reconstruction that is similar to the original reconstruction. To support our
analysis, we show that VAEs trained using disentangling methods not only score
well under our robustness metrics, but that the reasons for this can be
interpreted through our theoretical results.
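For concreteness, the paper's $r$-robustness criterion can be paraphrased as follows (notation ours): a stochastic model is $r$-robust at an input if a reconstruction within distance $r$ of the original is more likely than not under the model's own randomness.

```latex
% Paraphrase of the $r$-robustness criterion (notation ours, not verbatim).
% A stochastic model $f$ (e.g. a VAE's encode-decode pipeline) is
% $r$-robust at input $x$ to a perturbation $\delta$ if
\[
  p\big( \lVert f(x+\delta) - f(x) \rVert_2 \le r \big)
  \;>\;
  p\big( \lVert f(x+\delta) - f(x) \rVert_2 > r \big),
\]
% where the probability is taken over the model's stochasticity
% (for a VAE, the latent sample $z \sim q_\phi(z \mid x)$).
```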
Related papers
- Robust VAEs via Generating Process of Noise Augmented Data [9.366139389037489]
This paper introduces a novel framework that enhances robustness by regularizing the latent space divergence between original and noise-augmented data.
Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs.
arXiv Detail & Related papers (2024-07-26T09:55:34Z)
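A minimal sketch of the regularization idea described above, assuming a Gaussian encoder and KL as the divergence (the names and interface here are hypothetical, not RAVEN's actual implementation):

```python
# Hypothetical RAVEN-style regularizer sketch: penalize the divergence
# between the posteriors of a clean input and its noise-augmented copy.
# Assumes `encoder(x)` returns Gaussian posterior params (mu, log_sigma).
import torch
from torch.distributions import Normal, kl_divergence

def latent_divergence_penalty(encoder, x, noise_std=0.1):
    mu, log_sigma = encoder(x)                    # posterior for clean input
    x_aug = x + noise_std * torch.randn_like(x)   # noise-augmented input
    mu_aug, log_sigma_aug = encoder(x_aug)        # posterior for augmented input
    q = Normal(mu, log_sigma.exp())
    q_aug = Normal(mu_aug, log_sigma_aug.exp())
    return kl_divergence(q, q_aug).sum(dim=-1).mean()

# e.g. total_loss = elbo_loss + reg_weight * latent_divergence_penalty(enc, x)
```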
- Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z)
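To convey the flavour of such probabilistic guarantees, here is a naive Monte-Carlo sketch with a Hoeffding bound (entirely our construction; `model_sampler` and `.predict` are hypothetical, and the paper's actual method is different and tighter):

```python
# Illustrative only: sample plausible model shifts, check whether the
# counterfactual keeps its target label, and lower-bound the true
# validity probability with a one-sided Hoeffding bound.
import math

def robustness_lower_bound(model_sampler, counterfactual, target,
                           n=1000, delta=0.05):
    hits = sum(model_sampler().predict(counterfactual) == target  # hypothetical API
               for _ in range(n))
    p_hat = hits / n
    eps = math.sqrt(math.log(1 / delta) / (2 * n))  # Hoeffding deviation
    return max(0.0, p_hat - eps)  # holds with probability >= 1 - delta
```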
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
A Variational Autoencoder (VAE) approximates the posterior of latent variables via amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
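As background for the entry above, a minimal sketch of amortized variational inference: one encoder network outputs posterior parameters for any input, instead of optimizing them per data point (layer sizes arbitrary):

```python
# Standard Gaussian-posterior VAE encoder with the reparameterization trick.
import torch
import torch.nn as nn

class AmortizedEncoder(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)         # posterior mean head
        self.log_sigma = nn.Linear(h_dim, z_dim)  # posterior log-std head

    def forward(self, x):
        h = self.body(x)
        mu, log_sigma = self.mu(h), self.log_sigma(h)
        z = mu + log_sigma.exp() * torch.randn_like(mu)  # reparameterize
        return z, mu, log_sigma
```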
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
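A generic sketch of Chernoff-style certification (not CC-Cert's actual algorithm): bound the tail probability of a loss under random input transformations via $P(L > t) \le \inf_\lambda \mathbb{E}[e^{\lambda L}] e^{-\lambda t}$, with the expectation estimated from samples (so this version is only an estimate, not a certificate):

```python
# Chernoff tail-bound sketch over sampled transformation losses.
import numpy as np

def chernoff_tail_bound(losses, t, lambdas=np.linspace(0.01, 10.0, 200)):
    losses = np.asarray(losses)                    # bounded loss samples
    mgf = np.exp(lambdas[:, None] * losses[None, :]).mean(axis=1)  # E[e^{lam*L}]
    bounds = mgf * np.exp(-lambdas * t)            # Chernoff bound per lambda
    return float(min(1.0, bounds.min()))           # tightest lambda wins
```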
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour (a VAE's encoder failing to consistently encode samples generated by its own decoder) on the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
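One way to read the self-consistency idea above as a loss (our interpretation, not necessarily the paper's exact objective): re-encoding a model's own reconstruction should recover the original latent code.

```python
# Hypothetical self-consistency penalty; `encoder` follows the
# AmortizedEncoder interface sketched earlier, `decoder` maps z -> x.
import torch

def self_consistency_penalty(encoder, decoder, x):
    z, mu, _ = encoder(x)            # posterior code for the input
    x_rec = decoder(z)               # reconstruction of x
    _, mu_rec, _ = encoder(x_rec)    # re-encode the reconstruction
    return ((mu_rec - mu) ** 2).sum(dim=-1).mean()
```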
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Variational Encoder-based Reliable Classification [5.161531917413708]
We propose an Epistemic Classifier (EC) that can provide justification for its belief using support from the training dataset as well as the quality of reconstruction.
Our approach is based on modified variational autoencoders that can identify a semantically meaningful low-dimensional space.
Our results demonstrate improved reliability of predictions and robust identification of samples with adversarial attacks.
arXiv Detail & Related papers (2020-02-19T17:05:32Z)
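A loose sketch of how such justification could be operationalized (entirely illustrative; the abstention rule, thresholds, and names are our assumptions, not the paper's EC):

```python
# Abstaining classifier sketch: trust a prediction only if the input
# reconstructs well and its nearest latent-space training neighbours
# agree with the predicted label; otherwise return None (abstain).
import numpy as np

def reliable_predict(z, x, x_rec, train_z, train_y, predicted,
                     rec_thresh=0.05, k=5):
    if np.mean((x - x_rec) ** 2) > rec_thresh:
        return None                                # poor reconstruction
    nn_idx = np.argsort(np.sum((train_z - z) ** 2, axis=1))[:k]
    if not all(train_y[i] == predicted for i in nn_idx):
        return None                                # neighbours disagree
    return predicted
```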