MAD-VAE: Manifold Awareness Defense Variational Autoencoder
- URL: http://arxiv.org/abs/2011.01755v1
- Date: Sat, 31 Oct 2020 09:04:25 GMT
- Title: MAD-VAE: Manifold Awareness Defense Variational Autoencoder
- Authors: Frederick Morlock, Dingsu Wang
- Abstract summary: Building on Defense-VAE, we introduce several methods to improve the robustness of defense models.
With extensive experiments on the MNIST dataset, we demonstrate the effectiveness of our algorithms against different attacks.
We also discuss the applicability of existing adversarial latent-space attacks, which may have a significant flaw.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although deep generative models such as Defense-GAN and Defense-VAE have made
significant progress in terms of adversarial defenses of image classification
neural networks, several methods have been found to circumvent these defenses.
Building on Defense-VAE, we introduce several methods to improve the
robustness of defense models. The methods introduced in this paper are
straightforward yet show promise over the vanilla Defense-VAE. Through extensive
experiments on the MNIST dataset, we demonstrate the effectiveness of our
algorithms against different attacks. Our experiments also include attacks on
the latent space of the defensive model, and we discuss the applicability of
existing adversarial latent-space attacks, which may have a significant flaw.
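Defenses in the Defense-GAN/Defense-VAE family share one inference-time idea: pass the (possibly adversarial) input through a trained generative model and classify the reconstruction, which pulls the input back toward the learned data manifold. A minimal sketch of that purification step, assuming a trained VAE with hypothetical `encode`/`decode` methods and an arbitrary classifier `clf` (not the paper's released code):

```python
import torch

def purify_and_classify(x, vae, clf):
    """Defense-VAE-style inference: reconstruct the (possibly adversarial)
    input with a trained VAE, then classify the reconstruction.

    `vae` is assumed to expose encode() -> (mu, logvar) and decode(z);
    `clf` is any image classifier. Both interfaces are hypothetical."""
    with torch.no_grad():
        mu, logvar = vae.encode(x)   # latent posterior of the input
        x_hat = vae.decode(mu)       # reconstruction near the data manifold
        return clf(x_hat)            # classify the purified image
```

The latent code `mu` is exactly the surface that latent-space attacks target, which is why the abstract singles them out.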
Related papers
- Privacy-preserving Universal Adversarial Defense for Black-box Models [20.968518031455503]
We introduce DUCD, a universal black-box defense method that does not require access to the target model's parameters or architecture.
Our approach queries the target model with data to distill a white-box surrogate while preserving data privacy.
Experiments on multiple image classification datasets show that DUCD not only outperforms existing black-box defenses but also matches the accuracy of white-box defenses.
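At its core this is distillation by query: label a pool of data with the black-box target's outputs and fit a local surrogate to them, after which white-box defenses can be applied to the surrogate. A rough sketch under those assumptions (`query_target` is a hypothetical stand-in for the real query interface, and the KL-distillation loss is our choice for illustration):

```python
import torch
import torch.nn.functional as F

def distill_surrogate(surrogate, loader, query_target, optimizer, epochs=5):
    """Fit a white-box surrogate to mimic a black-box target model.

    query_target(x) -> class probabilities from the black-box model
    (hypothetical interface); ground-truth labels are never used."""
    surrogate.train()
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                teacher = query_target(x)            # black-box queries only
            student = F.log_softmax(surrogate(x), dim=1)
            loss = F.kl_div(student, teacher, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return surrogate
```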
arXiv Detail & Related papers (2024-08-20T08:40:39Z)
- Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings [13.604830818397629]
We propose a new key-based defense focusing on both efficiency and robustness.
We build upon the previous defense with two major improvements: (1) efficient training and (2) optional randomization.
Experiments were carried out on the ImageNet dataset, and the proposed defense was evaluated against an arsenal of state-of-the-art attacks.
arXiv Detail & Related papers (2023-09-04T14:08:34Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators [49.52538232104449]
It is becoming increasingly imperative to design robust ML defenses.
Recent work has found that many defenses that initially resist state-of-the-art attacks can be broken by an adaptive adversary.
We take steps to simplify the design of defenses and argue that white-box defenses should eschew randomness when possible.
arXiv Detail & Related papers (2023-02-27T01:33:31Z)
- Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [22.531976474053057]
The Projected Gradient Descent (PGD) attack has been demonstrated to be one of the most successful adversarial attacks.
We propose the Scale-Invariant Adversarial Attack (SI-PGD), which uses the angle between the features in the penultimate layer and the weights in the softmax layer to guide the generation of adversarial examples.
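For context, a plain PGD step ascends along the sign of the loss gradient and projects back into the ε-ball; SI-PGD's twist is to drive that loop with an angle-based objective instead of cross-entropy on raw logits. The sketch below is a schematic reading of the abstract: the exact form of `angle_loss` is our assumption, not the authors' released formulation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: iterated signed-gradient ascent with projection."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep valid pixel range
    return x_adv

def angle_loss(features, weights, y):
    """Illustrative scale-invariant objective: cross-entropy over cosine
    similarities between penultimate features (N, D) and softmax-layer
    weight rows (C, D). Our reading of SI-PGD, not the official code."""
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=1).T
    return F.cross_entropy(cos, y)
```

Plugging `angle_loss` into the loop requires running the model only up to the penultimate layer and capturing the classifier's weight matrix, e.g. `loss_fn = lambda feats, y: angle_loss(feats, W, y)` with `model` truncated before the softmax layer.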
arXiv Detail & Related papers (2022-01-29T08:40:53Z)
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z)
- Theoretical Study of Random Noise Defense against Query-Based Black-Box Attacks [72.8152874114382]
In this work, we study a simple but promising defense technique, dubbed Random Noise Defense (RND), against query-based black-box attacks.
It is lightweight and can be directly combined with any off-the-shelf model and other defense strategies.
We present solid theoretical analyses demonstrating that the defense effect of RND against query-based black-box attacks, and against the corresponding adaptive attacks, depends heavily on the magnitude ratio between the noise added by the defender and the noise used by the attacker for gradient estimation.
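Mechanically, RND amounts to answering every query on a randomly perturbed copy of the input, which is why it composes with any off-the-shelf model. A minimal sketch, assuming Gaussian noise with a defender-chosen scale `nu` (both are assumptions based on our reading, not the authors' released code):

```python
import torch

class RandomNoiseDefense(torch.nn.Module):
    """Answer every query on a noisy copy of the input: f(x + nu * n),
    n ~ N(0, I). The defense strength is governed by the ratio of nu to
    the attacker's own query-perturbation magnitude."""

    def __init__(self, model, nu=0.02):
        super().__init__()
        self.model = model
        self.nu = nu  # defender's noise scale (hyperparameter)

    def forward(self, x):
        return self.model(x + self.nu * torch.randn_like(x))
```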
arXiv Detail & Related papers (2021-04-23T08:39:41Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed algorithm is attack agnostic, i.e., it does not require any knowledge of the attack algorithm.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Defense against adversarial attacks on spoofing countermeasures of ASV [95.87555881176529]
This paper introduces a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of automatic speaker verification (ASV) spoofing countermeasure models.
The experimental results show that both defense methods help spoofing countermeasure models counter adversarial examples.
arXiv Detail & Related papers (2020-03-06T08:08:54Z)
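Spatial smoothing is the passive half of that recipe: low-pass filter the input (an image or a spectrogram) before it reaches the countermeasure model, suppressing the high-frequency structure adversarial perturbations rely on. A minimal sketch using a median filter, with the window size as a tunable assumption:

```python
import torch
import torch.nn.functional as F

def median_smooth(x, k=3):
    """Median filter over each channel of a batch of 2-D inputs
    (images or spectrograms): a common spatial-smoothing defense.

    x: tensor of shape (N, C, H, W); k: odd window size."""
    pad = k // 2
    x_pad = F.pad(x, (pad, pad, pad, pad), mode="reflect")
    patches = x_pad.unfold(2, k, 1).unfold(3, k, 1)  # (N, C, H, W, k, k)
    return patches.contiguous().view(*x.shape, -1).median(dim=-1).values
```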