Reverse engineering adversarial attacks with fingerprints from
adversarial examples
- URL: http://arxiv.org/abs/2301.13869v2
- Date: Wed, 1 Feb 2023 16:34:52 GMT
- Title: Reverse engineering adversarial attacks with fingerprints from
adversarial examples
- Authors: David Aaron Nicholson, Vincent Emanuele
- Abstract summary: Adversarial examples are typically generated by an attack algorithm that optimizes a perturbation added to a benign input.
We take a "fight fire with fire" approach, training deep neural networks to classify these perturbations.
We achieve an accuracy of 99.4% with a ResNet50 model trained on the perturbations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In spite of intense research efforts, deep neural networks remain vulnerable
to adversarial examples: inputs that force the network to confidently
produce incorrect outputs. Adversarial examples are typically generated by an
attack algorithm that optimizes a perturbation added to a benign input. Many
such algorithms have been developed. If it were possible to reverse engineer
attack algorithms from adversarial examples, this could deter bad actors
because of the possibility of attribution. Here we formulate reverse
engineering as a supervised learning problem where the goal is to assign an
adversarial example to a class that represents the algorithm and parameters
used. To our knowledge it has not been previously shown whether this is even
possible. We first test whether we can classify the perturbations added to
images by attacks on undefended single-label image classification models.
Taking a "fight fire with fire" approach, we leverage the sensitivity of deep
neural networks to adversarial examples, training them to classify these
perturbations. On a 17-class dataset (5 attacks, 4 bounded with 4 epsilon
values each), we achieve an accuracy of 99.4% with a ResNet50 model trained on
the perturbations. We then ask whether we can perform this task without access
to the perturbations, obtaining an estimate of them with signal processing
algorithms, an approach we call "fingerprinting". We find the JPEG algorithm
serves as a simple yet effective fingerprinter (85.05% accuracy), providing a
strong baseline for future work. We discuss how our approach can be extended to
attack agnostic, learnable fingerprints, and to open-world scenarios with
unknown attacks.
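The following sketch illustrates the two steps described in the abstract: classifying ground-truth perturbations and estimating them via a JPEG "fingerprint". It is a minimal illustration under stated assumptions, not the authors' implementation; the helper names, JPEG quality, and training hyperparameters are assumptions, while the 17-class setup and the ResNet50 backbone come from the abstract.

```python
# Minimal sketch (not the authors' released code) of (1) classifying
# ground-truth perturbations, obtained by subtracting the benign image from
# the adversarial one, and (2) estimating a perturbation "fingerprint" without
# the benign image by using JPEG compression as a crude denoiser.
# Helper names, the JPEG quality, and the optimizer settings are assumptions.
import io

import numpy as np
import torch
import torchvision
from PIL import Image


def true_perturbation(adv: np.ndarray, benign: np.ndarray) -> np.ndarray:
    """Ground-truth perturbation, available only when the benign image is known."""
    return adv.astype(np.float32) - benign.astype(np.float32)


def jpeg_fingerprint(adv: np.ndarray, quality: int = 75) -> np.ndarray:
    """Estimate the perturbation as the residual between the adversarial image
    (uint8, HxWx3) and its JPEG-compressed version."""
    buf = io.BytesIO()
    Image.fromarray(adv).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    compressed = np.asarray(Image.open(buf), dtype=np.float32)
    return adv.astype(np.float32) - compressed


# Supervised classifier over (attack algorithm, epsilon) classes; 17 classes
# as in the paper's dataset of 5 attacks, 4 of them with 4 epsilon values each.
model = torchvision.models.resnet50(num_classes=17)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)


def train_step(batch: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a batch of perturbations (or JPEG fingerprints),
    shaped (N, 3, H, W)."""
    optimizer.zero_grad()
    loss = criterion(model(batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design intuition, as described in the abstract, is that JPEG compression removes much of the high-frequency adversarial noise, so the residual between an adversarial image and its compressed version approximates the perturbation without requiring access to the benign original.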
Related papers
- Wasserstein distributional robustness of neural networks [9.79503506460041]
Deep neural networks are known to be vulnerable to adversarial attacks (AA).
For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified.
We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions.
arXiv Detail & Related papers (2023-06-16T13:41:24Z) - SAIF: Sparse Adversarial and Imperceptible Attack Framework [7.025774823899217]
We propose a novel attack technique called the Sparse Adversarial and Interpretable Attack Framework (SAIF).
Specifically, we design imperceptible attacks that contain low-magnitude perturbations at a small number of pixels and leverage these sparse attacks to reveal the vulnerability of classifiers.
SAIF computes highly imperceptible and interpretable adversarial examples, and outperforms state-of-the-art sparse attack methods on the ImageNet dataset.
arXiv Detail & Related papers (2022-12-14T20:28:50Z) - Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Improving Transformation-based Defenses against Adversarial Examples
with First-order Perturbations [16.346349209014182]
Studies show that neural networks are susceptible to adversarial attacks.
This exposes a potential threat to neural network-based intelligent systems.
We propose a method for counteracting adversarial perturbations to improve adversarial robustness.
arXiv Detail & Related papers (2021-03-08T06:27:24Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z) - An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
An attacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We examine two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - Evaluating a Simple Retraining Strategy as a Defense Against Adversarial
Attacks [17.709146615433458]
We show how simple algorithms like KNN can be used to determine the labels of the adversarial images needed for retraining (a minimal sketch of this idea appears after this list).
We present the results on two standard datasets namely, CIFAR-10 and TinyImageNet.
arXiv Detail & Related papers (2020-07-20T07:49:33Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defended models.
We significantly improve the success rate by 9.2% for defended models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
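For the retraining strategy summarized above ("Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks"), a minimal sketch of the KNN labeling step might look as follows. This is an illustration under stated assumptions, not the authors' implementation: the feature representation (flattened pixels), the value of k, and the function names are hypothetical.

```python
# Minimal sketch, assuming flattened pixels as features and k=5, of using
# k-NN fit on clean training data to pseudo-label adversarial images before
# retraining. Function names are hypothetical, not taken from the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def label_adversarial_images(x_clean: np.ndarray, y_clean: np.ndarray,
                             x_adv: np.ndarray, k: int = 5) -> np.ndarray:
    """Assign pseudo-labels to adversarial images via k-NN over flattened pixels."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(x_clean.reshape(len(x_clean), -1), y_clean)
    return knn.predict(x_adv.reshape(len(x_adv), -1))


def augmented_training_set(x_clean, y_clean, x_adv, y_adv):
    """Mix the pseudo-labeled adversarial images into the clean data for retraining."""
    return np.concatenate([x_clean, x_adv]), np.concatenate([y_clean, y_adv])
```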