Revisiting Role of Autoencoders in Adversarial Settings
- URL: http://arxiv.org/abs/2005.10750v1
- Date: Thu, 21 May 2020 16:01:23 GMT
- Title: Revisiting Role of Autoencoders in Adversarial Settings
- Authors: Byeong Cheon Kim, Jung Uk Kim, Hakmin Lee, Yong Man Ro
- Abstract summary: This paper presents the inherent adversarial robustness of autoencoders.
We believe that our discovery of the adversarial robustness of autoencoders can provide clues for future research and applications in adversarial defense.
- Score: 32.22707594954084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To combat adversarial attacks, the autoencoder structure is widely
used to perform denoising, which is often regarded as gradient masking. In this
paper, we revisit the role of autoencoders in adversarial settings. Through
comprehensive experimental results and analysis, this paper presents the
inherent adversarial robustness of autoencoders. We also found that
autoencoders may rely on robust features that give rise to this inherent
adversarial robustness. We believe that our discovery of the adversarial
robustness of autoencoders can provide clues for future research and
applications in adversarial defense.
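To make the defense setting concrete, here is a minimal PyTorch sketch of the arrangement the abstract refers to: a denoising autoencoder trained to reconstruct clean inputs from noisy copies, placed in front of a fixed classifier at test time. The architecture, sizes, and the `classifier` module are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Small denoising autoencoder for 28x28 grayscale images
    (sizes are assumptions, not the paper's architecture)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 28, 28)

def train_step(ae, opt, x, noise_std=0.3):
    # Denoising objective: reconstruct the clean x from a noisy copy.
    x_noisy = (x + noise_std * torch.randn_like(x)).clamp(0, 1)
    loss = nn.functional.mse_loss(ae(x_noisy), x)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def defended_predict(ae, classifier, x_adv):
    # Defense at test time: denoise the (possibly adversarial) input,
    # then classify the reconstruction.
    with torch.no_grad():
        return classifier(ae(x_adv)).argmax(dim=1)
```

The paper's claim is that the robustness observed in this setup is not only gradient masking: the autoencoder itself appears to rely on robust features.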
Related papers
- Anomaly Detection in OKTA Logs using Autoencoders [0.0]
Okta logs are used to detect cybersecurity events with rule-based models that have restricted look-back periods.
These approaches have limitations, such as limited retrospective analysis, a predefined rule set, and susceptibility to false positives.
We adopt unsupervised techniques, specifically autoencoders.
arXiv Detail & Related papers (2024-11-11T19:15:05Z)
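A minimal sketch of the reconstruction-error idea in the entry above: an autoencoder fit only on presumed-normal log feature vectors, with events flagged when their error exceeds a percentile threshold. The feature encoding and the threshold rule are assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

def make_ae(n_features, hidden=32):
    # Tiny bottleneck autoencoder over numeric log feature vectors.
    return nn.Sequential(
        nn.Linear(n_features, hidden), nn.ReLU(),
        nn.Linear(hidden, n_features))

def fit(ae, normal_features, epochs=200, lr=1e-3):
    # Train only on presumed-normal events.
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    x = torch.as_tensor(normal_features, dtype=torch.float32)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(ae(x), x)
        opt.zero_grad(); loss.backward(); opt.step()
    return ae

def anomaly_scores(ae, features):
    # Per-event reconstruction error; larger means more anomalous.
    x = torch.as_tensor(features, dtype=torch.float32)
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=1).numpy()

def flag(scores, normal_scores, percentile=99.0):
    # Threshold taken from the error distribution on normal data
    # (the percentile is an assumption, not the paper's rule).
    return scores > np.percentile(normal_scores, percentile)
```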
- Downstream-agnostic Adversarial Examples [66.8606539786026]
AdvEncoder is the first framework for generating downstream-agnostic universal adversarial examples based on a pre-trained encoder.
Unlike traditional adversarial example works, the pre-trained encoder only outputs feature vectors rather than classification labels.
Our results show that an attacker can successfully attack downstream tasks without knowing either the pre-training dataset or the downstream dataset.
arXiv Detail & Related papers (2023-07-23T10:16:47Z)
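A hedged sketch of the general idea behind encoder-targeted universal adversarial examples: optimize a single perturbation that pushes the frozen encoder's features away from the clean features, with no labels involved. This is a generic feature-deviation objective, not AdvEncoder's actual losses or architecture; the input shape and hyperparameters are assumptions.

```python
import torch

def universal_perturbation(encoder, loader, eps=8 / 255, epochs=1, lr=1e-2):
    # One perturbation shared across all inputs; the encoder stays frozen.
    # Assumes (N, 3, 224, 224) images in [0, 1] and (N, D) feature outputs.
    for p in encoder.parameters():
        p.requires_grad_(False)
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                f_clean = encoder(x)
            f_adv = encoder((x + delta).clamp(0, 1))
            # Minimizing cosine similarity drives adversarial features
            # away from clean ones -- no labels needed.
            loss = torch.nn.functional.cosine_similarity(
                f_adv, f_clean, dim=1).mean()
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep it quasi-imperceptible
    return delta.detach()
```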
- On the Adversarial Robustness of Generative Autoencoders in the Latent Space [22.99128324197949]
We provide the first study on the adversarial robustness of generative autoencoders in the latent space.
Specifically, we empirically demonstrate the latent vulnerability of popular generative autoencoders through attacks in the latent space.
We identify a potential trade-off between adversarial robustness and the degree of disentanglement of the latent codes.
arXiv Detail & Related papers (2023-07-05T10:53:49Z)
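A sketch of a supervised latent-space attack of the kind studied in the entry above: perturb an input within a small L-infinity ball so the encoder maps it close to a chosen target code. It assumes a deterministic encoder output (for a VAE, use the posterior mean); the budget and step counts are illustrative.

```python
import torch

def latent_attack(encoder, x, z_target, eps=0.05, steps=100, lr=1e-2):
    # Find x_adv within an L-inf ball around x whose latent code is
    # close to z_target (the "supervised" latent-space attack).
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        z = encoder((x + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(z, z_target)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (x + delta).clamp(0, 1).detach()
```

The "unsupervised" variant described in the VAE-diagnosis entry further below simply flips the sign of the loss, pushing the code away from the clean one instead of toward a target.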
- Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low.
arXiv Detail & Related papers (2022-12-20T17:13:22Z)
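A sketch that instantiates the claim in the entry above: a perturbation objective that pushes the reconstruction's semantics toward a target label while explicitly keeping the reconstruction loss low. The two-term objective, the `transmit` pipeline, and the batch size of one are assumptions, not the paper's multi-domain attack.

```python
import torch

def semantic_attack(transmit, classifier, x, target_label,
                    eps=0.03, steps=200, lr=1e-2, alpha=1.0):
    # transmit(x): encoder -> channel -> decoder, returning a reconstruction.
    # Push the reconstruction's semantics toward target_label while its
    # error w.r.t. x stays low. Assumes a batch of one image.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    y = torch.tensor([target_label])
    for _ in range(steps):
        recon = transmit(x + delta)
        sem_loss = torch.nn.functional.cross_entropy(classifier(recon), y)
        rec_loss = torch.nn.functional.mse_loss(recon, x)
        loss = sem_loss + alpha * rec_loss  # second term keeps recon loss low
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (x + delta).detach()
```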
- On a Mechanism Framework of Autoencoders [0.0]
This paper proposes a theoretical framework on the mechanism of autoencoders.
Results of ReLU autoencoders are generalized to some non-ReLU cases.
Compared to PCA and decision trees, the advantages of (generalized) autoencoders on dimensionality reduction and classification are demonstrated.
arXiv Detail & Related papers (2022-08-15T03:51:40Z)
- How to boost autoencoders? [13.166222736288432]
We discuss the challenges associated with boosting autoencoders and propose a framework to overcome them.
The usefulness of the boosted ensemble is demonstrated in two applications that widely employ autoencoders: anomaly detection and clustering.
arXiv Detail & Related papers (2021-10-28T17:21:25Z)
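The summary does not spell out the boosting scheme, so the sketch below shows one generic boosting-style ensemble as an assumption, not necessarily the paper's framework: autoencoders are trained sequentially with per-sample weights that shift toward inputs the current ensemble reconstructs poorly.

```python
import torch

def boost_autoencoders(make_ae, x, rounds=5, epochs=100, lr=1e-3):
    # x: (n_samples, n_features) tensor. Per-sample weights start uniform
    # and shift toward samples the ensemble reconstructs poorly.
    weights = torch.full((x.shape[0],), 1.0 / x.shape[0])
    ensemble = []
    for _ in range(rounds):
        ae = make_ae()
        opt = torch.optim.Adam(ae.parameters(), lr=lr)
        for _ in range(epochs):
            err = ((ae(x) - x) ** 2).mean(dim=1)
            loss = (weights * err).sum()  # weighted reconstruction loss
            opt.zero_grad(); loss.backward(); opt.step()
        ensemble.append(ae)
        with torch.no_grad():
            ens_err = torch.stack(
                [((m(x) - x) ** 2).mean(dim=1) for m in ensemble]).mean(dim=0)
            weights = ens_err / ens_err.sum()  # upweight hard samples
    return ensemble
```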
- Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks [80.73580820014242]
We show how to modify a data point to obtain a prescribed latent code (supervised attack) or a drastically different code (unsupervised attack).
We examine the influence of model modifications on the robustness of VAEs and suggest metrics to quantify it.
arXiv Detail & Related papers (2021-03-10T14:23:20Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature-preserving autoencoder filtering and the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
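A sketch combining the two signals named in the entry above: support images are filtered through a (feature-preserving) autoencoder and the set's self-similarity is measured in feature space; a low mean pairwise similarity marks the set as suspicious. The scoring rule is an illustrative assumption.

```python
import torch

def support_set_score(ae, feature_net, support):
    # support: (k, C, H, W) images claimed to belong to one class.
    with torch.no_grad():
        feats = feature_net(ae(support))  # features of AE-filtered images
        feats = torch.nn.functional.normalize(feats, dim=1)
        sim = feats @ feats.t()           # pairwise cosine similarities
        k = sim.shape[0]
        off_diag = sim[~torch.eye(k, dtype=torch.bool)]
    # Low mean self-similarity suggests a mixed (adversarial) support set.
    return off_diag.mean().item()
```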
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of a VAE failing to consistently encode samples generated by its own decoder, and the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to input perturbations introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
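A sketch of the self-consistency notion in the entry above: alongside the usual VAE objective, the encoder is encouraged to recover the latent code of samples produced by its own decoder. The use of the posterior mean and the loss weighting are assumptions.

```python
import torch

def self_consistency_loss(encoder_mean, decoder, z):
    # z: latent codes, e.g. drawn from the prior. The encoder should map
    # the decoder's output back to (roughly) the code that produced it.
    x_gen = decoder(z)
    z_rec = encoder_mean(x_gen)  # posterior mean of the re-encoded sample
    return torch.nn.functional.mse_loss(z_rec, z)

# Assumed usage: total = elbo_loss + beta * self_consistency_loss(...),
# with beta a weighting chosen by validation, not the paper's value.
```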
- Double Backpropagation for Training Autoencoders against Adversarial Attack [15.264115499966413]
This paper focuses on adversarial attacks on autoencoders.
We propose adopting double backpropagation (DBP) to secure autoencoders such as VAE and DRAW.
arXiv Detail & Related papers (2020-03-04T05:12:27Z)
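Double backpropagation (DBP) is a known regularizer: the gradient of the training loss with respect to the input is itself penalized, which requires differentiating through the backward pass. A minimal PyTorch sketch for an autoencoder follows; the weight `lam` is an assumption.

```python
import torch

def dbp_loss(ae, x, lam=0.1):
    # Double backpropagation: add the squared norm of the input gradient
    # of the reconstruction loss, so small input perturbations cannot
    # change the loss much. create_graph=True makes the penalty trainable.
    x = x.clone().requires_grad_(True)
    recon_loss = torch.nn.functional.mse_loss(ae(x), x.detach())
    (grad,) = torch.autograd.grad(recon_loss, x, create_graph=True)
    return recon_loss + lam * grad.pow(2).sum()
```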