DAD++: Improved Data-free Test Time Adversarial Defense
- URL: http://arxiv.org/abs/2309.05132v1
- Date: Sun, 10 Sep 2023 20:39:53 GMT
- Title: DAD++: Improved Data-free Test Time Adversarial Defense
- Authors: Gaurav Kumar Nayak, Inder Khatri, Shubham Randive, Ruchit Rawal,
Anirban Chakraborty
- Abstract summary: We propose a test time Data-free Adversarial Defense (DAD) containing detection and correction frameworks.
We conduct a wide range of experiments and ablations on several datasets and network architectures to show the efficacy of our proposed approach.
Our DAD++ gives an impressive performance against various adversarial attacks with a minimal drop in clean accuracy.
- Score: 12.606555446261668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing deployment of deep neural networks in safety-critical
applications such as self-driving cars, medical imaging, anomaly detection,
etc., adversarial robustness has become a crucial concern in the reliability of
these networks in real-world scenarios. A plethora of works based on
adversarial training and regularization-based techniques have been proposed to
make these deep networks robust against adversarial attacks. However, these
methods require either retraining models or training them from scratch, making
it infeasible to defend pre-trained models when access to training data is
restricted. To address this problem, we propose a test time Data-free
Adversarial Defense (DAD) containing detection and correction frameworks.
Moreover, to further improve the efficacy of the correction framework in cases
when the detector is under-confident, we propose a soft-detection scheme
(dubbed "DAD++"). We conduct a wide range of experiments and ablations on
several datasets and network architectures to show the efficacy of our proposed
approach. Furthermore, we demonstrate the applicability of our approach in
imparting adversarial defense at test time under data-free (or data-efficient)
applications/setups, such as Data-free Knowledge Distillation and Source-free
Unsupervised Domain Adaptation, as well as Semi-supervised classification
frameworks. We observe that in all the experiments and applications, our DAD++
gives an impressive performance against various adversarial attacks with a
minimal drop in clean accuracy. The source code is available at:
https://github.com/vcl-iisc/Improved-Data-free-Test-Time-Adversarial-Defense
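A minimal PyTorch sketch may help picture the pipeline: soft detection blends the prediction on the raw input with the prediction on the corrected input, weighted by the detector's confidence, so an under-confident detector still lets the correction framework contribute. All names and interfaces below (`detector`, `corrector`, the convex blend of logits) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def soft_detection_predict(model: nn.Module, corrector, detector,
                           x: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a DAD++-style soft-detection defense.

    Assumes `detector(x)` returns p in [0, 1] per sample (probability
    that x is adversarial) and `corrector(x)` returns a purified input;
    neither interface is taken from the paper.
    """
    with torch.no_grad():
        p_adv = detector(x)               # (batch, 1) confidence of attack
        logits_raw = model(x)             # prediction on the raw input
        logits_cor = model(corrector(x))  # prediction on the corrected input
        # Soft detection: weight the two predictions by the detector's
        # confidence instead of a hard accept/reject decision.
        return (1.0 - p_adv) * logits_raw + p_adv * logits_cor

# Stand-in modules, purely for shape-checking the sketch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
corrector = nn.Identity()
detector = lambda x: torch.full((x.shape[0], 1), 0.5)  # dummy 50% confidence
print(soft_detection_predict(model, corrector, detector,
                             torch.randn(4, 3, 32, 32)).shape)  # (4, 10)
```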
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been believed to be a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL is vulnerable to poisoning attacks that undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first to remain robust against such strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds. A toy multi-metric filtering sketch appears after this list.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- A Robust Likelihood Model for Novelty Detection [8.766411351797883]
Current approaches to novelty or anomaly detection are based on deep neural networks.
We propose a new prior that aims at learning a robust likelihood for the novelty test, as a defense against attacks.
We also integrate the same prior with a state-of-the-art novelty detection approach.
arXiv Detail & Related papers (2023-06-06T01:02:31Z)
- Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has proven to be an effective approach for improving model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines; a generic single-machine AT step is sketched after this list.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- DAD: Data-free Adversarial Defense at Test Time [21.741026088202126]
Deep models are highly susceptible to adversarial attacks.
Privacy has become an important concern, restricting access to trained models alone and not the underlying training data.
We propose the completely novel problem of 'test-time adversarial defense in absence of training data and even their statistics'.
arXiv Detail & Related papers (2022-04-04T15:16:13Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models have security-related vulnerabilities.
Data-poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic-based defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples (a sketch of such an attack follows this list).
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
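For the multi-metric federated-learning defense above (MESAS), the core idea of forcing an adaptive adversary to satisfy several objectives at once can be illustrated with a toy filter: a client update is kept only if it looks benign under every metric. This is a loose stand-in under assumed metrics (magnitude, direction, spread) and a robust z-score test, not the MESAS algorithm itself.

```python
import torch
import torch.nn.functional as F

def filter_client_updates(updates, z_thresh=3.5):
    """Toy multi-metric filter: discard a client update if ANY metric
    flags it as an outlier, so an adaptive attacker must evade all
    metrics simultaneously. Metrics and threshold are assumptions."""
    flat = torch.stack([u.flatten() for u in updates])
    mean = flat.mean(dim=0, keepdim=True)
    metrics = torch.stack([
        flat.norm(dim=1),                        # update magnitude
        F.cosine_similarity(flat, mean, dim=1),  # agreement in direction
        flat.var(dim=1),                         # spread of the weights
    ])
    keep = torch.ones(len(updates), dtype=torch.bool)
    for m in metrics:  # robust z-score (median / MAD) per metric
        med = m.median()
        mad = (m - med).abs().median().clamp_min(1e-12)
        keep &= (m - med).abs() / (1.4826 * mad) <= z_thresh
    return [u for u, k in zip(updates, keep) if k]

# Nine benign-looking updates plus one oversized outlier.
updates = [torch.randn(100) for _ in range(9)] + [10.0 * torch.randn(100)]
print(len(filter_client_updates(updates)))  # likely 9: the outlier is dropped
```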
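The distributed large-batch adversarial training paper above builds on the standard AT recipe: craft a worst-case perturbation for each batch, then train on the perturbed batch. Below is a generic single-machine PGD-AT step for reference; it is a textbook sketch, not the paper's distributed framework, and the hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer,
                              eps=8/255, alpha=2/255, steps=7):
    """One PGD adversarial-training step: inner maximization crafts a
    bounded perturbation, outer minimization updates the model on it."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):                      # inner maximization (PGD)
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)             # stay in the L-inf ball
        delta.grad.zero_()
    optimizer.zero_grad()                       # outer minimization
    F.cross_entropy(model(x + delta.detach()), y).backward()
    optimizer.step()
```

A distributed variant would average the outer gradients across machines (e.g. via torch.nn.parallel.DistributedDataParallel) while each worker runs the inner loop on its own shard.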
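Finally, the instance-level attack in the self-supervised contrastive entry can be pictured as PGD against a similarity objective rather than a class label: push the embedding of the perturbed input away from the embedding of its own augmented view. The sketch below uses an assumed cosine-similarity objective as a stand-in for the paper's contrastive loss.

```python
import torch
import torch.nn.functional as F

def instance_level_attack(encoder, x, x_aug,
                          eps=8/255, alpha=2/255, steps=10):
    """PGD on unlabeled data: maximize the distance between the embedding
    of x + delta and that of its augmented view, confusing instance-level
    identity instead of any class label. The objective is an assumption."""
    delta = torch.zeros_like(x, requires_grad=True)
    z_aug = encoder(x_aug).detach()              # fixed positive-pair target
    for _ in range(steps):
        z = encoder(x + delta)
        loss = -F.cosine_similarity(z, z_aug, dim=1).mean()
        loss.backward()                          # ascend: push the pair apart
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()
```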
This list is automatically generated from the titles and abstracts of the papers in this site.