Progressive Defense Against Adversarial Attacks for Deep Learning as a
Service in Internet of Things
- URL: http://arxiv.org/abs/2010.11143v1
- Date: Thu, 15 Oct 2020 06:40:53 GMT
- Title: Progressive Defense Against Adversarial Attacks for Deep Learning as a
Service in Internet of Things
- Authors: Ling Wang, Cheng Zhang, Zejian Luo, Chenguang Liu, Jie Liu, Xi Zheng,
and Athanasios Vasilakos
- Abstract summary: Some Deep Neural Networks (DNNs) can be easily misled by adding relatively small but adversarial perturbations to the input.
We present a defense strategy called progressive defense against adversarial attacks (PDAAA) that efficiently and effectively filters out adversarial pixel mutations.
The result shows it outperforms the state-of-the-art while reducing the cost of model training by 50% on average.
- Score: 9.753864027359521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, Deep Learning as a service can be deployed in Internet of Things
(IoT) to provide smart services and sensor data processing. However, recent
research has revealed that some Deep Neural Networks (DNNs) can be easily misled
by adding relatively small but adversarial perturbations to the input (e.g.,
pixel mutation in input images). One challenge in defending DNNs against these
attacks is to efficiently identify and filter out the adversarial pixels.
The state-of-the-art defense strategies with good robustness often require
additional model training for specific attacks. To reduce the computational
cost without loss of generality, we present a defense strategy called
progressive defense against adversarial attacks (PDAAA) for efficiently and
effectively filtering out adversarial pixel mutations, which could mislead the
neural network towards erroneous outputs, without a priori knowledge about
the attack type. We evaluated our progressive defense strategy against various
attack methods on two well-known datasets. The result shows it outperforms the
state-of-the-art while reducing the cost of model training by 50% on average.
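The paper itself does not ship code, but the general idea of an attack-agnostic, inference-time input-filtering defense can be sketched. The snippet below is a minimal, hypothetical illustration only: the function names, the median/bit-depth filters, and the prediction-stability stopping rule are assumptions made for exposition, not the authors' PDAAA implementation.

    import numpy as np
    from scipy.ndimage import median_filter

    def reduce_bit_depth(image, bits=5):
        # Quantize pixel values in [0, 1]; low-amplitude perturbations collapse
        # onto the same quantization level (assumed filter choice).
        levels = 2 ** bits - 1
        return np.round(image * levels) / levels

    def progressive_filter(image, predict_fn, max_rounds=3):
        # Apply increasingly aggressive filtering until the model's prediction
        # stops changing. `image` is a 2-D array in [0, 1]; `predict_fn` maps
        # an image to a class label. The stopping rule is an illustrative
        # heuristic, not the criterion used in the paper.
        current = image
        previous_label = predict_fn(current)
        for round_idx in range(1, max_rounds + 1):
            # Stronger smoothing and coarser quantization at each round.
            filtered = median_filter(current, size=1 + 2 * round_idx)
            filtered = reduce_bit_depth(filtered, bits=max(8 - round_idx, 3))
            label = predict_fn(filtered)
            if label == previous_label:
                # Prediction is stable under further filtering: accept it.
                return filtered, label
            current, previous_label = filtered, label
        return current, previous_label

Because such filtering runs only at inference time and wraps the deployed model, no attack-specific retraining is needed, which is the property the abstract emphasizes when comparing training cost against the state of the art.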
Related papers
- GenFighter: A Generative and Evolutive Textual Attack Removal [6.044610337297754]
Adversarial attacks pose significant challenges to deep neural networks (DNNs) such as Transformer models in natural language processing (NLP).
This paper introduces a novel defense strategy, called GenFighter, which enhances adversarial robustness by learning and reasoning on the training classification distribution.
We show that GenFighter outperforms state-of-the-art defenses in accuracy under attack and attack success rate metrics.
arXiv Detail & Related papers (2024-04-17T16:32:13Z)
- Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks [0.548924822963045]
Model stealing attacks can lead to intellectual property theft and other security and privacy risks.
Current state-of-the-art defenses against model stealing attacks suggest adding perturbations to the prediction probabilities (a sketch of this idea appears after the list below).
We propose a simple yet effective and efficient defense alternative.
arXiv Detail & Related papers (2023-09-04T22:25:49Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks [21.633448874100004]
Unlearnable example attacks can be used to safeguard public data against unauthorized use for training deep learning models.
We introduce the UEraser method, which outperforms current defenses against unlearnable example attacks.
Our code is open source and available to the deep learning community.
arXiv Detail & Related papers (2023-03-27T12:00:54Z)
- SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks [9.494669823390648]
Vehicular networks have always faced data privacy preservation concerns.
Federated learning, which is used to address these concerns, is quite vulnerable to model inversion and model poisoning attacks.
We propose simulated poisoning and inversion network (SPIN) that leverages the optimization approach for reconstructing data.
arXiv Detail & Related papers (2022-11-21T10:07:13Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks [17.709146615433458]
We show how simple algorithms like KNN can be used to determine the labels of the adversarial images needed for retraining.
We present the results on two standard datasets namely, CIFAR-10 and TinyImageNet.
arXiv Detail & Related papers (2020-07-20T07:49:33Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
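Returning to the model-stealing entry above, the defenses it mentions perturb the prediction probabilities returned to a querying client so that a surrogate model trained on the responses degrades. A minimal sketch of that general idea follows; the Dirichlet noise, the epsilon parameter, and the top-1-preserving check are assumptions for illustration, not the defense proposed in that paper.

    import numpy as np

    def perturb_probabilities(probs, epsilon=0.05, rng=None):
        # Return a noisy copy of a probability vector while keeping the top-1
        # class, so benign accuracy is unchanged but the exact confidences
        # leaked to a model-stealing adversary are distorted (illustrative
        # sketch only).
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.dirichlet(np.ones_like(probs))
        noisy = (1.0 - epsilon) * probs + epsilon * noise
        if np.argmax(noisy) != np.argmax(probs):
            # Fall back to the original vector rather than flip the label.
            return probs
        return noisy / noisy.sum()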