On the Adversarial Robustness of Generative Autoencoders in the Latent Space
- URL: http://arxiv.org/abs/2307.02202v1
- Date: Wed, 5 Jul 2023 10:53:49 GMT
- Title: On the Adversarial Robustness of Generative Autoencoders in the Latent Space
- Authors: Mingfei Lu and Badong Chen
- Abstract summary: We provide the first study on the adversarial robustness of generative autoencoders in the latent space.
Specifically, we empirically demonstrate the latent vulnerability of popular generative autoencoders through attacks in the latent space.
We identify a potential trade-off between the adversarial robustness and the degree of the disentanglement of the latent codes.
- Score: 22.99128324197949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative autoencoders, such as variational autoencoders and adversarial autoencoders, have achieved great success in many real-world applications, including image generation and signal communication.
However, little attention has been devoted to their robustness during practical deployment.
Due to their probabilistic latent structure, variational autoencoders (VAEs) may confront problems such as a mismatch between the posterior distribution of the latent codes and the real data manifold, or discontinuities in the posterior distribution.
This leaves a back door through which malicious attackers can collapse VAEs from the latent space, especially in scenarios where the encoder and decoder are used separately, such as communication and compressed sensing.
In this work, we provide the first study on the adversarial robustness of
generative autoencoders in the latent space.
Specifically, we empirically demonstrate the latent vulnerability of popular
generative autoencoders through attacks in the latent space.
We also evaluate the difference between variational autoencoders and their deterministic variants, and observe that the latter exhibit better latent robustness.
Meanwhile, we identify a potential trade-off between the adversarial
robustness and the degree of the disentanglement of the latent codes.
We also verify that the latent robustness of VAEs can be improved through adversarial training.
In summary, we call attention to the adversarial latent robustness of generative autoencoders, analyze several robustness-related issues, and offer insights into a series of key challenges.
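As a rough illustration of what an attack in the latent space can look like, below is a minimal PGD-style sketch in PyTorch. It assumes a generic pre-trained model exposing `encode(x) -> (mu, logvar)` and `decode(z)` methods; the objective, step size, and eps-ball are illustrative choices, not the paper's exact attack.

```python
# A minimal sketch of a latent-space attack under the assumed interface above.
import torch
import torch.nn.functional as F

def latent_attack(model, x, eps=0.5, step=0.05, n_steps=40):
    """Search for a small latent perturbation that maximally distorts the
    decoder output relative to the clean reconstruction."""
    model.eval()
    with torch.no_grad():
        mu, _ = model.encode(x)      # clean latent code (posterior mean)
        x_ref = model.decode(mu)     # reference reconstruction
    delta = torch.zeros_like(mu, requires_grad=True)
    for _ in range(n_steps):
        x_adv = model.decode(mu + delta)
        loss = F.mse_loss(x_adv, x_ref)           # deviation to maximize
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps) \
                    .detach().requires_grad_(True)
    return (mu + delta).detach()                  # adversarial latent code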
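Likewise, a hedged sketch of latent adversarial training, with a cheap one-step attack in the inner loop; the interface and loss weighting `alpha` are assumptions, and the paper's actual training procedure may differ.

```python
# A sketch of adversarial training in the latent space, reusing the assumed
# encode/decode interface; a one-step attack stands in for the PGD above.
import torch
import torch.nn.functional as F

def latent_adv_training_step(model, x, optimizer, eps=0.5, alpha=0.5):
    mu, logvar = model.encode(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
    # One-step (FGSM-style) attack on the latent code.
    delta = torch.zeros_like(mu, requires_grad=True)
    atk_loss = F.mse_loss(model.decode(mu.detach() + delta), x)
    grad, = torch.autograd.grad(atk_loss, delta)
    z_adv = mu.detach() + eps * grad.sign()
    # Standard ELBO terms plus a reconstruction term at the perturbed code,
    # encouraging the decoder to map nearby codes back to the same input.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = F.mse_loss(model.decode(z), x) + kl \
           + alpha * F.mse_loss(model.decode(z_adv), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```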
Related papers
- Concurrent Density Estimation with Wasserstein Autoencoders: Some Statistical Insights [20.894503281724052]
Wasserstein Autoencoders (WAEs) have been a pioneering force in the realm of deep generative models.
Our work is an attempt to offer a theoretical understanding of the machinery behind WAEs.
arXiv Detail & Related papers (2023-12-11T18:27:25Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Language-Driven Anchors for Zero-Shot Adversarial Robustness [25.160195547250655]
We propose a Language-driven, Anchor-based Adversarial Training strategy (LAAT).
By leveraging the semantic consistency of text encoders, LAAT aims to enhance the adversarial robustness of the image model.
We show that LAAT significantly improves zero-shot adversarial robustness over state-of-the-art methods.
arXiv Detail & Related papers (2023-01-30T17:34:43Z)
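A rough sketch of the anchor-based idea in the entry above: here `text_anchors` stands in for fixed, L2-normalized class embeddings from a text encoder (computed offline), and `image_encoder` is any network mapping images into the same embedding space. The attack, temperature, and loss are generic stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def laat_step(image_encoder, x, y, text_anchors, optimizer,
              eps=8 / 255, step=2 / 255, n_steps=5):
    anchors = F.normalize(text_anchors, dim=-1)        # fixed class anchors
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(n_steps):                           # PGD in the input space
        emb = F.normalize(image_encoder(x + delta), dim=-1)
        loss = -(emb * anchors[y]).sum(dim=-1).mean()  # push away from anchor
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps) \
                    .detach().requires_grad_(True)
    emb_adv = F.normalize(image_encoder(x + delta.detach()), dim=-1)
    logits = emb_adv @ anchors.t()                     # cosine similarities
    loss = F.cross_entropy(logits / 0.07, y)           # pull back to the anchor
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```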
- Benign Autoencoders [0.0]
We formalize the problem of finding the optimal encoder-decoder pair and characterize its solution, which we name the "benign autoencoder" (BAE).
We prove that BAE projects data onto a manifold whose dimension is the optimal compressibility dimension of the generative problem.
As an illustration, we show how BAE can find optimal, low-dimensional latent representations that improve the performance of a discriminator under a distribution shift.
arXiv Detail & Related papers (2022-10-02T21:36:27Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation.
We propose adversarial self-supervision UDA (ASSUDA), which maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
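A simplified stand-in for the output-space agreement term just described: make the segmentation output for an adversarial image match that of its clean counterpart. `model` is any segmentation network returning per-pixel logits; the actual contrastive (InfoNCE-style) details are omitted.

```python
import torch.nn.functional as F

def agreement_loss(model, x_clean, x_adv):
    p_clean = F.softmax(model(x_clean), dim=1).detach()  # (B, C, H, W) probs
    logp_adv = F.log_softmax(model(x_adv), dim=1)
    # Per-pixel KL(clean || adv), averaged over the batch.
    return F.kl_div(logp_adv, p_clean, reduction="batchmean")
```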
- Anomaly Detection Based on Selection and Weighting in Latent Space [73.01328671569759]
We propose a novel selection-and-weighting-based anomaly detection framework called SWAD.
Experiments on both benchmark and real-world datasets have shown the effectiveness and superiority of SWAD.
arXiv Detail & Related papers (2021-03-08T10:56:38Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
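"This behaviour" is left underspecified in the digest above; assuming it refers to a VAE failing to consistently re-encode its own generations, a minimal self-consistency penalty could look like the following (same assumed `encode`/`decode` interface as earlier).

```python
import torch
import torch.nn.functional as F

def self_consistency_loss(model, batch_size, latent_dim, device="cpu"):
    z = torch.randn(batch_size, latent_dim, device=device)  # prior samples
    x_gen = model.decode(z)
    mu, _ = model.encode(x_gen)        # re-encode the generated samples
    return F.mse_loss(mu, z)           # re-encoded codes should match z
```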
- Towards a Theoretical Understanding of the Robustness of Variational Autoencoders [82.68133908421792]
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
We develop a novel criterion for robustness in probabilistic models: $r$-robustness.
We show that VAEs trained using disentangling methods score well under our robustness metrics.
arXiv Detail & Related papers (2020-07-14T21:22:29Z)
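A hedged, empirical reading of the $r$-robustness idea above: under a stochastic model, estimate how often the output for a perturbed input stays within radius r of a clean reference output. The paper's formal criterion may differ; this is only a Monte-Carlo estimator in that spirit.

```python
import torch

def estimate_r_robustness(model, x, delta, r, n_samples=100):
    with torch.no_grad():
        x_ref = model(x)                      # reference (stochastic) output
        hits = 0
        for _ in range(n_samples):
            diff = model(x + delta) - x_ref   # fresh stochastic forward pass
            if diff.flatten(1).norm(dim=1).mean() <= r:
                hits += 1
    return hits / n_samples
```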
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Revisiting Role of Autoencoders in Adversarial Settings [32.22707594954084]
This paper presents the inherent adversarial robustness of autoencoders.
We believe that our discovery of the adversarial robustness of autoencoders can provide clues for future research and applications in adversarial defense.
arXiv Detail & Related papers (2020-05-21T16:01:23Z)
- Double Backpropagation for Training Autoencoders against Adversarial Attack [15.264115499966413]
This paper focuses on the adversarial attack on autoencoders.
We propose to adopt double backpropagation (DBP) to secure autoencoders such as VAE and DRAW.
arXiv Detail & Related papers (2020-03-04T05:12:27Z)
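A compact sketch of the double-backpropagation idea for an autoencoder: penalize the input-gradient of the reconstruction loss so that small input changes cannot blow up the reconstruction. `model(x)` is assumed to return the reconstruction, and the penalty weight `lam` is illustrative.

```python
import torch
import torch.nn.functional as F

def dbp_loss(model, x, lam=0.1):
    x = x.clone().requires_grad_(True)
    target = x.detach()
    rec = F.mse_loss(model(x), target)
    # First backward pass, kept differentiable so the penalty itself can be
    # backpropagated (the "double" in double backpropagation).
    grad, = torch.autograd.grad(rec, x, create_graph=True)
    penalty = grad.pow(2).flatten(1).sum(dim=1).mean()
    return rec + lam * penalty
```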