On Trace of PGD-Like Adversarial Attacks
- URL: http://arxiv.org/abs/2205.09586v1
- Date: Thu, 19 May 2022 14:26:50 GMT
- Title: On Trace of PGD-Like Adversarial Attacks
- Authors: Mo Zhou, Vishal M. Patel
- Abstract summary: Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and requires little data.
- Score: 77.75152218980605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks pose safety and security concerns for deep learning
applications. Although largely imperceptible, a strong PGD-like attack may leave
a strong trace in the adversarial example. Since such an attack triggers the
local linearity of a network, we speculate that the network behaves with
different extents of linearity for benign and adversarial examples. Thus, we
construct Adversarial Response Characteristics (ARC) features to reflect the
model's gradient consistency around the input, which indicates the extent of
linearity. Under certain conditions, the ARC feature shows a gradually varying
pattern from benign example to adversarial example, as the latter leads to the
Sequel Attack Effect (SAE). The ARC feature can be used for informed attack
detection (perturbation magnitude is known) with a binary classifier, or for
uninformed attack detection (perturbation magnitude is unknown) with ordinal
regression. Because the SAE is unique to PGD-like attacks, ARC can also infer
other attack details, such as the loss function, or recover the ground-truth
label as a post-processing defense. Qualitative and quantitative evaluations
demonstrate the effectiveness of the ARC feature on CIFAR-10 with ResNet-18 and
on ImageNet with ResNet-152 and SwinT-B-IN1K, with considerable generalization
across PGD-like attacks despite domain shift. Our method is intuitive,
lightweight, non-intrusive, and requires little data.
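To make "gradient consistency around the input" concrete, the sketch below computes cosine similarities between loss gradients along a short PGD-like probe trajectory. This is only an illustrative approximation of the ARC idea, not the authors' exact construction; the function name, step count, step size, and the use of cross-entropy are assumptions.

```python
# Illustrative sketch only: a simplified "gradient consistency" feature in the
# spirit of ARC. The exact ARC construction in the paper may differ; the step
# size, number of probes, and cross-entropy loss here are assumptions.
import torch
import torch.nn.functional as F

def gradient_consistency_features(model, x, y, eps=4/255, steps=4):
    """Cosine similarity between loss gradients along a PGD-like trajectory.

    model : classifier returning logits
    x, y  : input batch and (predicted or true) labels
    Returns a (batch, steps) tensor; under the paper's hypothesis, adversarial
    inputs show a different consistency pattern than benign ones.
    """
    model.eval()
    alpha = eps / steps
    x_cur = x.clone()
    prev_grad, feats = None, []
    for _ in range(steps + 1):
        x_cur = x_cur.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_cur), y)
        grad, = torch.autograd.grad(loss, x_cur)
        if prev_grad is not None:
            cos = F.cosine_similarity(grad.flatten(1), prev_grad.flatten(1), dim=1)
            feats.append(cos)
        prev_grad = grad
        # PGD-like probe step (sign of the gradient, clipped to valid pixel range)
        x_cur = (x_cur + alpha * grad.sign()).clamp(0, 1)
    return torch.stack(feats, dim=1)  # shape: (batch, steps)
```

Features of this kind could then be fed to a binary classifier for informed detection, or to an ordinal regression model for uninformed detection, as the abstract describes.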
Related papers
- Universal Detection of Backdoor Attacks via Density-based Clustering and
Centroids Analysis [24.953032059932525]
We propose a Universal Defence against backdoor attacks based on Clustering and Centroids Analysis (CCA-UD).
The goal of the defence is to reveal whether a Deep Neural Network model is subject to a backdoor attack by inspecting the training dataset.
arXiv Detail & Related papers (2023-01-11T16:31:38Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed the Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
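For reference, a plain PGD loop relies on exactly the ingredients this entry says G-PGA avoids: a random start, a fixed step size, and many iterations. The sketch below is standard L-infinity PGD, not the guided G-PGA mechanism, whose surrogate guidance is not detailed in this summary; the function name and hyperparameters are illustrative.

```python
# Minimal standard L-infinity PGD for reference only; this is NOT the guided
# G-PGA attack above, whose surrogate-guidance step is not described here.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-inf PGD with a random start inside the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                              # keep valid pixels
    return x_adv.detach()
```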
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
Standard adversarial robustness methods assume a framework that defends against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariance-based perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Evaluation of Neural Networks Defenses and Attacks using NDCG and
Reciprocal Rank Metrics [6.6389732792316]
We present two metrics which are specifically designed to measure the effect of attacks, or the recovery effect of defenses, on the output of neural networks in classification tasks.
Inspired by the normalized discounted cumulative gain and the reciprocal rank metrics used in information retrieval literature, we treat the neural network predictions as ranked lists of results.
Compared to the common classification metrics, our proposed metrics demonstrate superior informativeness and distinctiveness.
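As a concrete illustration of treating classifier predictions as ranked lists, the sketch below computes mean reciprocal rank and a single-relevant-item NDCG from raw scores. The exact metric definitions in the cited paper may differ; treating only the true label as relevant is an assumption, and the function names are illustrative.

```python
# Illustrative sketch: scoring classifier predictions as ranked lists with
# reciprocal rank and NDCG, treating only the true label as relevant.
import numpy as np

def rank_of_true_label(logits, labels):
    """1-based rank of the true label when classes are sorted by score."""
    order = np.argsort(-logits, axis=1)                 # classes in descending score order
    return (order == labels[:, None]).argmax(axis=1) + 1

def mean_reciprocal_rank(logits, labels):
    return float(np.mean(1.0 / rank_of_true_label(logits, labels)))

def mean_ndcg(logits, labels):
    # single relevant item => DCG = 1/log2(rank + 1), ideal DCG = 1
    ranks = rank_of_true_label(logits, labels)
    return float(np.mean(1.0 / np.log2(ranks + 1)))

# Example: a drop in these scores after an attack reflects how far the true
# class was pushed down the ranking, not just whether the top-1 label flipped.
logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.9]])
labels = np.array([0, 2])
print(mean_reciprocal_rank(logits, labels), mean_ndcg(logits, labels))
```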
arXiv Detail & Related papers (2022-01-10T12:54:45Z) - Using Anomaly Feature Vectors for Detecting, Classifying and Warning of
Outlier Adversarial Examples [4.096598295525345]
We present DeClaW, a system for detecting, classifying, and warning of adversarial inputs presented to a classification neural network.
Preliminary findings suggest that AFVs can help distinguish among several types of adversarial attacks with close to 93% accuracy on the CIFAR-10 dataset.
arXiv Detail & Related papers (2021-07-01T16:00:09Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features for arbitrary attack strengths.
Our method is trained to automatically align features across arbitrary attack strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Adversarial Detection and Correction by Matching Prediction
Distributions [0.0]
The detector almost completely neutralises powerful attacks like Carlini-Wagner or SLIDE on MNIST and Fashion-MNIST.
We show that our method is still able to detect the adversarial examples in the case of a white-box attack where the attacker has full knowledge of both the model and the defence.
arXiv Detail & Related papers (2020-02-21T15:45:42Z)