Improving Deep Learning Model Robustness Against Adversarial Attack by
Increasing the Network Capacity
- URL: http://arxiv.org/abs/2204.11357v1
- Date: Sun, 24 Apr 2022 21:04:17 GMT
- Authors: Marco Marchetti and Edmond S. L. Ho
- Abstract summary: This paper explores the security issues in Deep Learning and analyses, through experiments, how to build more resilient models.
Experiments are conducted to identify the strengths and weaknesses of a new approach to improve the robustness of DL models against adversarial attacks.
- Score: 4.605037293860087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, we are increasingly reliant on Deep Learning (DL) models,
and thus it is essential to safeguard the security of these systems. This paper
explores the security issues in Deep Learning and analyses, through
experiments, how to build more resilient models. Experiments are conducted to
identify the strengths and weaknesses of a new approach for improving the
robustness of DL models against adversarial attacks. The results show
improvements and new ideas that can serve as recommendations for researchers
and practitioners to create increasingly better DL algorithms.
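The adversarial attacks studied in the paper craft small input perturbations that maximise a model's loss. As a minimal, self-contained illustration (not the paper's own method or models), here is the standard Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model in NumPy; the weights, bias, and input are made-up values for demonstration:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: shift every input feature by
    epsilon in the direction that increases the loss."""
    return x + epsilon * np.sign(grad)

def input_gradient(x, w, b, y):
    """Gradient of the cross-entropy loss of a logistic model
    with respect to the input x (not the weights)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(y = 1)
    return w * (p - y)

# Made-up toy model and input, for illustration only.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])            # clean input, true label y = 1
g = input_gradient(x, w, b, y=1.0)
x_adv = fgsm_perturb(x, g, epsilon=0.1)  # -> array([0.4, 0.6])
```

Even this one-step attack lowers the model's confidence in the true class; the perturbation budget `epsilon` bounds how far each feature may move, and attacks of this kind are what robustness evaluations such as the paper's measure against.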
Related papers
- Impact of Architectural Modifications on Deep Learning Adversarial Robustness [16.991522358940774]
We present an experimental evaluation of the effects of model modifications on deep learning model robustness using adversarial attacks.
Our results indicate the pressing demand for an in-depth assessment of the effects of model changes on the robustness of models.
arXiv Detail & Related papers (2024-05-03T08:58:38Z)
- Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers [0.8348593305367524]
Deep learning techniques for automatic modulation classification (AMC) of wireless signals are vulnerable to adversarial attacks.
This poses a severe security threat to the DL-based wireless systems, specifically for edge applications of AMC.
We address the joint problem of developing optimized DL models that are also robust against adversarial attacks.
arXiv Detail & Related papers (2024-04-11T06:15:01Z)
- PGN: A perturbation generation network against deep reinforcement learning [8.546103661706391]
We propose a novel generative model for creating effective adversarial examples to attack the agent.
Considering the specificity of deep reinforcement learning, we propose the action consistency ratio as a measure of stealthiness.
arXiv Detail & Related papers (2023-12-20T10:40:41Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Holistic Adversarial Robustness of Deep Learning Models [91.34155889052786]
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.
This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models.
arXiv Detail & Related papers (2022-02-15T05:30:27Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications [27.033174829788404]
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning.
We will highlight state-of-the-art techniques in adversarial attacks and robustness verification of deep neural networks (DNNs).
We will also introduce some effective countermeasures to improve the robustness of deep learning models.
arXiv Detail & Related papers (2021-08-24T00:08:33Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Improved Adversarial Training via Learned Optimizer [101.38877975769198]
We propose a framework to improve the robustness of adversarial training models.
By co-training the optimizer's parameters with the model's weights, the proposed framework consistently improves robustness and learns update directions and step sizes adaptively.
arXiv Detail & Related papers (2020-04-25T20:15:53Z)
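Several of the related papers above (the jet-tagging work and the learned-optimizer framework) build on adversarial training: a min-max procedure that trains the model on worst-case perturbed inputs. A hedged sketch of the standard loop, using a one-step FGSM inner maximisation on a toy NumPy logistic model with made-up data, rather than any specific paper's setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.1, lr=0.5, steps=200):
    """Min-max training: perturb each input to (approximately)
    maximise the loss, then do gradient descent on the perturbed batch."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        # Inner max: one FGSM step on the inputs.
        grad_x = (p - y)[:, None] * w[None, :]
        X_adv = X + epsilon * np.sign(grad_x)
        # Outer min: gradient descent on the adversarial batch.
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * np.mean(p_adv - y)
    return w, b

# Made-up separable toy data: the class follows the sign of feature 0,
# with a margin (0.8) comfortably larger than epsilon (0.1).
X = np.array([[1.0, 0.2], [0.8, -0.1], [-1.0, 0.3], [-0.9, -0.2]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adversarial_train(X, y)
```

The inner step here is the cheapest possible approximation of the inner maximum; papers in this area typically use stronger multi-step attacks (e.g. PGD) or, as in the learned-optimizer work above, replace the hand-designed inner step with a learned one.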
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.