Attribution of Gradient Based Adversarial Attacks for Reverse
Engineering of Deceptions
- URL: http://arxiv.org/abs/2103.11002v1
- Date: Fri, 19 Mar 2021 19:55:00 GMT
- Title: Attribution of Gradient Based Adversarial Attacks for Reverse
Engineering of Deceptions
- Authors: Michael Goebel, Jason Bunk, Srinjoy Chattopadhyay, Lakshmanan Nataraj,
Shivkumar Chandrasekaran and B.S. Manjunath
- Abstract summary: We present two techniques that support automated identification and attribution of adversarial ML attack toolchains.
To the best of our knowledge, this is the first approach to attribute gradient based adversarial attacks and estimate their parameters.
- Score: 16.23543028393521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) algorithms are susceptible to adversarial attacks and
deception both during training and deployment. Automatic reverse engineering of
the toolchains behind these adversarial machine learning attacks will aid in
recovering the tools and processes used in these attacks. In this paper, we
present two techniques that support automated identification and attribution of
adversarial ML attack toolchains using Co-occurrence Pixel statistics and
Laplacian Residuals. Our experiments show that the proposed techniques can
identify parameters used to generate adversarial samples. To the best of our
knowledge, this is the first approach to attribute gradient based adversarial
attacks and estimate their parameters. Source code and data are available at:
https://github.com/michael-goebel/ei_red
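The abstract names two feature families: co-occurrence pixel statistics and Laplacian residuals. As a rough illustration of what such features can look like (a minimal sketch only; the function names, 8-bit grayscale input, and downstream classifier are assumptions, not the authors' exact pipeline in the linked repository):
```python
import numpy as np
from scipy.ndimage import laplace

def laplacian_residual(img):
    """High-frequency residual from a Laplacian filter; gradient-based
    perturbations tend to leave statistical traces in this band."""
    return laplace(img.astype(np.float64))

def cooccurrence_matrix(img, levels=256):
    """Normalized counts of horizontally adjacent pixel-value pairs
    in an 8-bit grayscale image."""
    q = img.astype(np.int64)
    idx = q[:, :-1] * levels + q[:, 1:]
    counts = np.bincount(idx.ravel(), minlength=levels * levels)
    return (counts / counts.sum()).reshape(levels, levels)

# Illustrative use: concatenate both statistics as input to a classifier
# that predicts the attack type and its parameters.
img = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
features = np.concatenate([
    cooccurrence_matrix(img).ravel(),
    np.histogram(laplacian_residual(img), bins=64)[0].astype(np.float64),
])
```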
Related papers
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against
Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
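For context, LoRA injects a trainable low-rank update alongside a frozen weight matrix; a minimal numpy sketch of that update (shapes, rank, and initialization are illustrative, and DALA's distribution-aware objective is not shown):
```python
import numpy as np

d_out, d_in, rank = 768, 768, 8          # illustrative transformer-like sizes
W = np.random.randn(d_out, d_in) * 0.02  # frozen pretrained weight
A = np.random.randn(rank, d_in) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, rank))              # zero-init so the delta starts at 0

def lora_forward(x):
    """W stays frozen; only the rank-8 delta B @ A is trained, here to
    steer outputs toward adversarial behavior."""
    return W @ x + B @ (A @ x)
```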
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion
Detection Systems [0.7829352305480285]
A growing number of researchers have recently been investigating the feasibility of adversarial attacks against machine learning-based security systems.
This study investigates the actual feasibility of such attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
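The "intended malicious functionality" constraint is what separates this from unconstrained image attacks: only attacker-controllable features may be perturbed, and values must stay in valid ranges. A hedged sketch of such a projection step (the feature names and bounds below are invented for illustration):
```python
import numpy as np

# Hypothetical flow features: [duration, pkt_count, mean_pkt_size, dst_port]
MUTABLE = np.array([True, True, True, False])   # dst_port must not change
LOWER = np.array([0.0, 1.0, 40.0, 0.0])
UPPER = np.array([3600.0, 1e6, 1500.0, 65535.0])

def project_valid(x_adv, x_orig):
    """Keep immutable features fixed and clip the rest to valid ranges,
    so the perturbed flow remains functional botnet traffic."""
    x = np.where(MUTABLE, x_adv, x_orig)
    return np.clip(x, LOWER, UPPER)
```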
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- Preprocessors Matter! Realistic Decision-Based Attacks on Machine
Learning Systems [56.64374584117259]
Decision-based attacks construct adversarial examples against a machine learning (ML) model by making only hard-label queries.
We develop techniques to (i) reverse-engineer the preprocessor and then (ii) use this extracted information to attack the end-to-end system.
Our preprocessor extraction method requires only a few hundred queries, and our preprocessor-aware attacks recover the same efficacy as attacking the model alone.
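To make the setting concrete, the sketch below shows a hard-label query interface where the reverse-engineered preprocessor is applied before the model, so perturbations are optimized against the end-to-end system; the resize pipeline is an assumed example, not the paper's extraction procedure:
```python
import numpy as np
from PIL import Image

def extracted_preprocessor(x):
    """Stand-in for a reverse-engineered pipeline: resize an (H, W, 3)
    uint8 image to 224x224 (parameters assumed for illustration)."""
    return np.asarray(Image.fromarray(x.astype(np.uint8)).resize((224, 224)))

def hard_label_query(model, x):
    """A decision-based attacker only observes the top-1 label of the
    end-to-end system, preprocessing included."""
    return model(extracted_preprocessor(x))
```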
arXiv Detail & Related papers (2022-10-07T03:10:34Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
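The primitive behind such attacks is flipping individual bits in a weight's stored representation; a minimal numpy illustration (which bits to flip is what the paper's formulation optimizes, and is not shown here):
```python
import numpy as np

def flip_bit(weights, index, bit):
    """Flip one bit of weights[index] in its IEEE-754 float32 encoding."""
    as_int = weights.view(np.int32)       # reinterpret bits, shares memory
    as_int[index] ^= np.int32(1) << bit
    return weights

w = np.array([0.5, -1.25], dtype=np.float32)
flip_bit(w, 0, 30)   # flipping a high exponent bit of 0.5 gives ~1.7e38
print(w)
```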
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- An Adversarial Attack Analysis on Malicious Advertisement URL Detection
Framework [22.259444589459513]
Malicious advertisement URLs pose a security risk since they are a source of cyber-attacks.
Existing malicious URL detection techniques are limited in their ability to handle unseen features and to generalize to test data.
In this study, we extract a novel set of lexical and web-scraped features and employ machine learning techniques to build a system for detecting fraudulent advertisement URLs.
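As a hedged example of the lexical side, a few features commonly used for URL classification (the paper's exact feature set differs):
```python
import math
from collections import Counter
from urllib.parse import urlparse

def lexical_features(url):
    """Common lexical URL features: length, digit ratio, subdomain
    depth, and character entropy."""
    host = urlparse(url).netloc
    counts = Counter(url)
    entropy = -sum((c / len(url)) * math.log2(c / len(url))
                   for c in counts.values())
    return {
        "length": len(url),
        "digit_ratio": sum(ch.isdigit() for ch in url) / len(url),
        "subdomain_depth": host.count("."),
        "entropy": entropy,
    }

print(lexical_features("http://ad.click-now123.example.com/win?id=9"))
```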
arXiv Detail & Related papers (2022-04-27T20:06:22Z)
- Adversarial defenses via a mixture of generators [0.0]
Adversarial examples remain a relatively weakly understood feature of deep learning systems.
We show that such a mixture-of-generators system can be trained without supervision, simultaneously on multiple adversarial attacks.
Our system is able to recover class information for previously-unseen examples with neither attack nor data labels on the MNIST dataset.
arXiv Detail & Related papers (2021-10-05T21:27:50Z)
- Zero-shot learning approach to adaptive Cybersecurity using Explainable
AI [0.5076419064097734]
We present a novel approach to handle the alarm flooding problem faced by Cybersecurity systems like security information and event management (SIEM) and intrusion detection systems (IDS).
We apply a zero-shot learning method to machine learning (ML) by leveraging explanations for predictions of anomalies generated by a ML model.
In this approach, without any prior knowledge of the attack, we try to identify it, decipher the features that contribute to its classification, and bucket the attack into a specific category.
arXiv Detail & Related papers (2021-06-21T06:29:13Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight
Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest technique in integer programming, equivalently reformulate it as a continuous optimization problem.
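In the bit-flipping literature this reformulation is typically the lp-box trick, sketched below as a general identity (an assumption about this paper's specifics, though the identity itself is standard):

    b ∈ {0,1}^n  ⇔  b ∈ [0,1]^n ∩ { b : ‖b − (1/2)·1‖₂² = n/4 }

Inside the box each (b_i − 1/2)² is at most 1/4, so the sum can reach n/4 only when every coordinate is exactly 0 or 1; the discrete bit-selection problem can therefore be solved with continuous optimization.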
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and crumble the foundation of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
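A minimal sketch of the denoising-autoencoder defense on MNIST-sized inputs (the architecture and sizes are illustrative, not the paper's exact models):
```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Trained on (corrupted image, clean image) pairs; at test time it
    is prepended to the classifier to 'clean' potentially adversarial
    inputs."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view(-1, 1, 28, 28)

# Defense at inference time: prediction = classifier(dae(x))
```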
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
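A toy sketch of the underlying search problem: exhaustively compose short ordered chains from a pool of base attacks (CAA itself uses a learned search over attacks and their hyperparameters, which is not shown):
```python
import itertools

def composite_search(x, label, model, attacks, max_len=2):
    """Try every ordered chain of up to max_len base attacks and return
    the first chain whose output flips the model's predicted label."""
    for n in range(1, max_len + 1):
        for chain in itertools.permutations(attacks, n):
            x_adv = x
            for attack in chain:          # attacks are callables
                x_adv = attack(model, x_adv, label)
            if model(x_adv) != label:     # model returns a hard label here
                return chain, x_adv
    return None, x
```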
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation-learning power of CNNs and learns better features for the face presentation attack detection (fPAD) task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.