Q-TART: Quickly Training for Adversarial Robustness and in-Transferability
- URL: http://arxiv.org/abs/2204.07024v1
- Date: Thu, 14 Apr 2022 15:23:08 GMT
- Title: Q-TART: Quickly Training for Adversarial Robustness and in-Transferability
- Authors: Madan Ravi Ganesh, Salimeh Yasaei Sekeh, and Jason J. Corso
- Abstract summary: We propose to tackle Performance, Efficiency, and Robustness simultaneously with our algorithm Q-TART.
Q-TART follows the intuition that samples highly susceptible to noise strongly affect the decision boundaries learned by deep neural networks.
We demonstrate improved performance and adversarial robustness while using only a subset of the training data.
- Score: 28.87208020322193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Raw deep neural network (DNN) performance is not enough; in real-world
settings, computational load, training efficiency and adversarial security are
just as or even more important. We propose to simultaneously tackle
Performance, Efficiency, and Robustness with our algorithm Q-TART: Quickly
Training for Adversarial Robustness and in-Transferability. Q-TART follows
the intuition that samples highly susceptible to noise strongly affect the
decision boundaries learned by DNNs, which in turn degrades their performance
and increases their adversarial susceptibility. By identifying and removing such samples, we
demonstrate improved performance and adversarial robustness while using only a
subset of the training data. Through our experiments we highlight Q-TART's high
performance across multiple Dataset-DNN combinations, including ImageNet, and
provide insights into the complementary behavior of Q-TART alongside existing
adversarial training approaches to increase robustness by over 1.3% while using
up to 17.9% less training time.
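As a rough illustration of this intuition, the sketch below scores each training sample by how often Gaussian input noise flips the model's prediction and keeps only the least susceptible fraction. The flip-rate score, the noise scale `sigma`, and the keep ratio are illustrative assumptions, not Q-TART's exact criterion.

```python
import torch

def noise_susceptibility(model, x, sigma=0.1, n_draws=8):
    """Fraction of noise draws that flip a sample's prediction.

    A stand-in for Q-TART's susceptibility measure (assumption): higher
    scores suggest the sample sits near a noise-sensitive decision boundary.
    """
    model.eval()
    with torch.no_grad():
        clean_pred = model(x).argmax(dim=1)
        flips = torch.zeros(x.size(0), device=x.device)
        for _ in range(n_draws):
            noisy_pred = model(x + sigma * torch.randn_like(x)).argmax(dim=1)
            flips += (noisy_pred != clean_pred).float()
    return flips / n_draws  # per-sample score in [0, 1]

def select_training_subset(model, x, y, keep_ratio=0.8):
    """Drop the most noise-susceptible samples and train on the rest."""
    scores = noise_susceptibility(model, x)
    keep = scores.argsort()[: int(keep_ratio * x.size(0))]  # least susceptible
    return x[keep], y[keep]
```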
Related papers
- Scaling Off-Policy Reinforcement Learning with Batch and Weight Normalization [15.605124749589946]
CrossQ has demonstrated state-of-the-art sample efficiency with a low update-to-data (UTD) ratio of 1.
We identify challenges in the training dynamics, which are emphasized by higher UTD ratios.
Our proposed approach reliably scales with increasing UTD ratios, achieving competitive performance across 25 challenging continuous control tasks.
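To make the update-to-data (UTD) terminology concrete, here is a minimal sketch of an off-policy loop that performs `utd_ratio` critic updates per collected transition, with a BatchNorm-equipped critic in the spirit of CrossQ. The network shape, hyperparameters, and the `replay`/`critic_update` interfaces are placeholders, not the paper's implementation.

```python
import torch.nn as nn

def make_critic(obs_dim, act_dim, hidden=256):
    """Q(s, a) critic with batch normalization (CrossQ-style sketch)."""
    return nn.Sequential(
        nn.Linear(obs_dim + act_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        nn.Linear(hidden, 1),
    )

def train_iteration(collect_transition, replay, critic_update, utd_ratio=10):
    """One environment step followed by `utd_ratio` gradient updates.

    UTD > 1 means the critic is updated more often than new data arrives,
    which is where the training-dynamics challenges show up.
    """
    replay.add(collect_transition())
    for _ in range(utd_ratio):
        critic_update(replay.sample())
```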
arXiv Detail & Related papers (2025-02-11T12:55:32Z)
- Memory Faults in Activation-sparse Quantized Deep Neural Networks: Analysis and Mitigation using Sharpness-aware Training [0.0]
We investigate the impact of memory faults on activation-sparse quantized DNNs (AS QDNNs).
We show that a high level of activation sparsity comes at the cost of larger vulnerability to faults, with AS QDNNs exhibiting up to 11.13% lower accuracy than the standard QDNNs.
We employ sharpness-aware quantization (SAQ) training to mitigate the impact of memory faults.
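SAQ builds on sharpness-aware training; below is a minimal sketch of the generic two-step sharpness-aware update that such methods apply, shown here without the quantization machinery that SAQ adds on top. The radius `rho` is an assumed hyperparameter.

```python
import torch

def sharpness_aware_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One sharpness-aware update: ascend to nearby worst-case weights,
    then descend using the gradient measured there (generic SAM sketch)."""
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()                       # gradient at w
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    perturbations = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                                     # move to w + e
            perturbations.append((p, e))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()                       # gradient at w + e
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)                                     # restore w
    optimizer.step()                                      # update with sharp gradient
```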
arXiv Detail & Related papers (2024-06-15T06:40:48Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend networks against such malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
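A hedged sketch of such a mini-batch selection strategy: only the hardest samples in each batch are given adversarial perturbations, trading off robustness, clean accuracy, and attack cost. Selecting by per-sample loss and the fraction `frac` are illustrative choices, not necessarily the paper's exact rule.

```python
import torch
import torch.nn.functional as F

def selective_adversarial_batch(model, attack, x, y, frac=0.5):
    """Perturb only the highest-loss samples in a mini-batch (sketch).

    `attack(model, x, y)` is any adversarial example generator, e.g. PGD.
    """
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    k = max(1, int(frac * x.size(0)))
    idx = losses.topk(k).indices          # hardest samples get attacked
    x_adv = x.clone()
    x_adv[idx] = attack(model, x[idx], y[idx])
    return x_adv, y
```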
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
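For context, interval bound propagation (IBP) pushes an axis-aligned input box through the network layer by layer. The numpy sketch below propagates bounds through one affine-plus-ReLU layer, with a naive uniform weight quantizer standing in for the paper's quantization scheme.

```python
import numpy as np

def quantize(w, n_bits=8):
    """Symmetric uniform quantizer (a stand-in, not the paper's scheme)."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

def ibp_affine_relu(lower, upper, W, b, n_bits=8):
    """Propagate the box [lower, upper] through relu(quantize(W) @ x + b).

    Center/radius form: mu' = Wq @ mu + b and r' = |Wq| @ r, so the output
    box is [mu' - r', mu' + r'], then clipped by the ReLU.
    """
    Wq = quantize(W, n_bits)
    mu, r = (lower + upper) / 2.0, (upper - lower) / 2.0
    mu_out = Wq @ mu + b
    r_out = np.abs(Wq) @ r
    return np.maximum(mu_out - r_out, 0.0), np.maximum(mu_out + r_out, 0.0)
```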
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization [58.641705224371876]
The adversarial vulnerability of deep neural networks (DNNs) has been actively investigated in the past several years.
This paper investigates the scale-variant property of cross-entropy loss, which is the most commonly used loss function in classification tasks.
We show that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training.
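The summary does not state the exact definition, so the sketch below uses one common first-order formalization (an assumption): the logit margin divided by the norm of its input gradient, a quantity that is unchanged when all logits are rescaled, which is precisely the scale-variance that plain cross-entropy ignores.

```python
import torch

def effective_margin(model, x, y):
    """First-order input-space margin estimate (assumed definition).

    margin_i = (z_y - z_j) / ||d(z_y - z_j)/dx||, j = runner-up class.
    Scaling all logits by c > 0 leaves this quantity unchanged.
    """
    x = x.clone().requires_grad_(True)
    z = model(x)
    top2 = z.topk(2, dim=1).indices
    j = torch.where(top2[:, 0] == y, top2[:, 1], top2[:, 0])   # runner-up
    diff = z.gather(1, y[:, None]) - z.gather(1, j[:, None])   # logit margin
    grad = torch.autograd.grad(diff.sum(), x)[0]
    return diff.squeeze(1) / grad.flatten(1).norm(dim=1).clamp_min(1e-12)
```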
arXiv Detail & Related papers (2022-10-11T03:16:56Z)
- Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection [2.2977141788872366]
It is known that Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.
The standard adversarial training (SAT) method has a severe issue that limits its practical use.
We show that our adaptive-margin adversarial training (AMAT) method outperforms the SAT method in adversarial robustness on noisy data and prediction accuracy on clean data.
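The mechanism is not spelled out in the summary; as a hedged illustration, adaptive schemes of this kind typically maintain a per-sample perturbation budget that shrinks when the attack already fools the model and grows when it does not. The update rule and constants below are assumptions, not the paper's exact method.

```python
import torch

def adapt_perturbation_budget(model, attack, x, y, eps, step=0.002, eps_max=0.03):
    """Illustrative per-sample budget adaptation (not the paper's exact rule).

    `attack(model, x, y, eps)` crafts adversarial examples within a
    per-sample budget `eps` (a tensor with one entry per sample).
    """
    x_adv = attack(model, x, y, eps)
    with torch.no_grad():
        fooled = model(x_adv).argmax(dim=1) != y
    eps = torch.where(fooled, eps - step, eps + step)   # ease off / push harder
    return x_adv, eps.clamp(0.0, eps_max)
```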
arXiv Detail & Related papers (2022-06-02T20:17:53Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
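A minimal numpy sketch of interval reachability for a ReLU implicit layer z = relu(A @ z + B @ x + b): split the weight matrices by sign and iterate coupled lower/upper bounds. It assumes a well-posedness/contraction condition of the kind the paper establishes, so that the iteration converges.

```python
import numpy as np

def inn_interval_bounds(A, B, b, x_lower, x_upper, iters=100, tol=1e-6):
    """Elementwise bounds on the equilibrium of z = relu(A z + B x + b)."""
    Ap, An = np.maximum(A, 0), np.minimum(A, 0)   # sign-split for intervals
    Bp, Bn = np.maximum(B, 0), np.minimum(B, 0)
    z_lo = np.zeros(A.shape[0])
    z_hi = np.zeros(A.shape[0])
    for _ in range(iters):
        new_lo = np.maximum(Ap @ z_lo + An @ z_hi + Bp @ x_lower + Bn @ x_upper + b, 0)
        new_hi = np.maximum(Ap @ z_hi + An @ z_lo + Bp @ x_upper + Bn @ x_lower + b, 0)
        if max(np.abs(new_lo - z_lo).max(), np.abs(new_hi - z_hi).max()) < tol:
            return new_lo, new_hi
        z_lo, z_hi = new_lo, new_hi
    return z_lo, z_hi
```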
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard trained direct input SNNs, our trained models yield improved classification accuracy of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images.
arXiv Detail & Related papers (2021-10-06T16:48:48Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a $\textit{slow start, fast decay}$ learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
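The named schedule is simple to state; here is a sketch with assumed breakpoint values (the paper's exact warm-up length, base rate, and decay factor are not given in this summary).

```python
def slow_start_fast_decay(step, warmup=500, base_lr=0.01, decay=0.95):
    """'Slow start, fast decay' learning-rate schedule (assumed constants):
    ramp up linearly during fine-tuning warm-up, then decay geometrically."""
    if step < warmup:
        return base_lr * (step + 1) / warmup        # slow linear warm-up
    return base_lr * decay ** (step - warmup)       # fast exponential decay
```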
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- GOAT: GPU Outsourcing of Deep Learning Training With Asynchronous Probabilistic Integrity Verification Inside Trusted Execution Environment [0.0]
Machine learning models based on Deep Neural Networks (DNNs) are increasingly deployed in a range of applications ranging from self-driving cars to COVID-19 treatment discovery.
To support the computational power necessary to learn a DNN, cloud environments with dedicated hardware support have emerged as critical infrastructure.
Various approaches have been developed to address the resulting security challenges, building on trusted execution environments (TEEs).
arXiv Detail & Related papers (2020-10-17T20:09:05Z)
- Cross Learning in Deep Q-Networks [82.20059754270302]
We propose a novel cross Q-learning algorithm aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods.
Our algorithm builds on double Q-learning by maintaining a set of parallel models and estimating the Q-value based on a randomly selected network.
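A hedged sketch of the target computation this describes: the online network picks the greedy action while a randomly selected ensemble member evaluates it, in the spirit of double Q-learning; the ensemble size and aggregation details are assumptions.

```python
import random
import torch

def cross_q_target(q_online, q_ensemble, next_obs, rewards, dones, gamma=0.99):
    """Bootstrapped target with a randomly chosen evaluator network.

    Decoupling action selection from value estimation curbs the
    overestimation bias of the plain max operator.
    """
    with torch.no_grad():
        greedy_a = q_online(next_obs).argmax(dim=1, keepdim=True)
        q_eval = random.choice(q_ensemble)           # random parallel model
        next_q = q_eval(next_obs).gather(1, greedy_a).squeeze(1)
    return rewards + gamma * (1.0 - dones.float()) * next_q
```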
arXiv Detail & Related papers (2020-09-29T04:58:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.