StratDef: Strategic Defense Against Adversarial Attacks in ML-based
Malware Detection
- URL: http://arxiv.org/abs/2202.07568v6
- Date: Mon, 24 Apr 2023 15:15:20 GMT
- Title: StratDef: Strategic Defense Against Adversarial Attacks in ML-based
Malware Detection
- Authors: Aqib Rashid, Jose Such
- Abstract summary: StratDef is a strategic defense system based on a moving target defense approach.
We show that StratDef performs better than other defenses even when facing the peak adversarial threat.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the years, most research towards defenses against adversarial attacks on
machine learning models has been in the image recognition domain. The ML-based
malware detection domain has received less attention despite its importance.
Moreover, most work exploring these defenses has focused on several methods but
with no strategy when applying them. In this paper, we introduce StratDef,
which is a strategic defense system based on a moving target defense approach.
We overcome challenges related to the systematic construction, selection, and
strategic use of models to maximize adversarial robustness. StratDef
dynamically and strategically chooses the best models to increase the
uncertainty for the attacker while minimizing critical aspects in the
adversarial ML domain, like attack transferability. We provide the first
comprehensive evaluation of defenses against adversarial attacks on machine
learning for malware detection, where our threat model explores different
levels of threat, attacker knowledge, capabilities, and attack intensities. We
show that StratDef performs better than other defenses even when facing the
peak adversarial threat. We also show that, of the existing defenses, only a
few adversarially-trained models provide substantially better protection than
just using vanilla models but are still outperformed by StratDef.
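As a rough illustration of the moving target idea above, here is a minimal sketch (not the authors' implementation) in which each query is answered by a model sampled from a weighted pool; the model pool, the strategy weights, and the .predict interface are all illustrative assumptions.

```python
# Illustrative sketch of a moving target defense, not StratDef itself.
# The model pool, strategy weights, and prediction API are assumptions.
import random

class MovingTargetDefense:
    def __init__(self, models, weights):
        # models: trained malware classifiers exposing a .predict(x) method
        # weights: strategy distribution over models (e.g., derived offline by
        # trading off accuracy against inter-model attack transferability)
        assert len(models) == len(weights)
        self.models = models
        self.weights = weights

    def predict(self, x):
        # Each query is answered by a model sampled from the strategy,
        # increasing the attacker's uncertainty about which model they face.
        model = random.choices(self.models, weights=self.weights, k=1)[0]
        return model.predict(x)
```

In StratDef's terms, the weights would come from the systematic construction and selection step; a uniform distribution is the simplest baseline.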
Related papers
- Versatile Defense Against Adversarial Attacks on Image Recognition [2.9980620769521513]
Defending against adversarial attacks in a real-life setting can be compared to the way antivirus software works.
It appears that a defense method based on image-to-image translation may be capable of this.
The trained model has successfully improved the classification accuracy from nearly zero to an average of 86%.
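A minimal sketch of the purification idea, assuming pre-trained purifier and classifier callables (both hypothetical): the input is translated back toward the clean image distribution before classification.

```python
# Hypothetical sketch of purification-by-translation: an image-to-image model
# maps a (possibly adversarial) input back toward the clean data manifold
# before classification. `purifier` and `classifier` are assumed pre-trained.
import torch

@torch.no_grad()
def defended_predict(purifier, classifier, images):
    # images: float tensor of shape (N, C, H, W) with values in [0, 1]
    cleaned = purifier(images).clamp(0.0, 1.0)  # translate toward clean domain
    logits = classifier(cleaned)
    return logits.argmax(dim=1)
```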
arXiv Detail & Related papers (2024-03-13T01:48:01Z)
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks in such scenarios.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
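A hedged sketch of the filter-in-front idea: features ranked highly by an XAI attribution method are kept and the rest zeroed out before the original authenticator sees the input. The attribution scores, keep fraction, and masking scheme are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch: keep only features an explainer deems reliable, then pass
# the reduced vector to the original authenticator unchanged.
import numpy as np

def make_feature_filter(importances, keep_fraction=0.5):
    # importances: per-feature attribution scores from an XAI method
    k = max(1, int(len(importances) * keep_fraction))
    keep = np.argsort(importances)[-k:]      # indices of the top-k features
    def filter_features(x):
        mask = np.zeros_like(x)
        mask[..., keep] = 1
        return x * mask                      # zero out unselected features
    return filter_features
```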
arXiv Detail & Related papers (2024-02-26T09:29:05Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Baseline Defenses for Adversarial Attacks Against Aligned Language Models [109.75753454188705]
Recent work shows that text optimizers can produce jailbreaking prompts that bypass moderation and alignment.
We look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training.
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
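A minimal sketch of the perplexity-based detection baseline, assuming a HuggingFace GPT-2 as the scoring model and an illustrative threshold; the paper's exact scoring model and cutoff may differ.

```python
# Sketch of a perplexity filter: optimizer-crafted adversarial suffixes tend
# to be high-perplexity gibberish. Model choice and threshold are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(prompt):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss        # mean per-token NLL
    return torch.exp(loss).item()

def is_suspicious(prompt, threshold=1000.0):
    return perplexity(prompt) > threshold
```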
arXiv Detail & Related papers (2023-09-01T17:59:44Z)
- MultiRobustBench: Benchmarking Robustness Against Multiple Attacks [86.70417016955459]
We present the first unified framework for considering multiple attacks against machine learning (ML) models.
Our framework is able to model different levels of the learner's knowledge about the test-time adversary.
We evaluate the performance of 16 defended models for robustness against a set of 9 different attack types.
arXiv Detail & Related papers (2023-02-21T20:26:39Z)
- Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection [0.0]
Moving target defenses (MTDs) to counter adversarial ML attacks have been proposed in recent years.
We study for the first time the effectiveness of several recent MTDs for adversarial ML attacks applied to the malware detection domain.
We show that transferability and query attack strategies can achieve high levels of evasion against these defenses.
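The query-attack strategy can be illustrated with a toy black-box loop over binary malware features; the oracle interface, the flippable-feature set, and the query budget are assumptions for the sketch.

```python
# Toy sketch of a query attack against a deployed (possibly moving-target)
# detector: randomly flip functionality-preserving binary features until the
# oracle labels the sample benign or the query budget runs out.
import numpy as np

def query_attack(defense_predict, x, flippable, budget=100):
    # defense_predict: black-box oracle, 1 = malware, 0 = benign
    # x: binary feature vector; flippable: indices safe to perturb
    rng = np.random.default_rng(0)
    adv = x.copy()
    for _ in range(budget):
        if defense_predict(adv) == 0:
            return adv                   # the detector now labels it benign
        adv[rng.choice(flippable)] ^= 1  # flip one randomly chosen feature
    return None                          # budget exhausted without evasion
```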
arXiv Detail & Related papers (2023-02-01T16:03:34Z)
- Learning Near-Optimal Intrusion Responses Against Dynamic Attackers [0.0]
We study automated intrusion response and formulate the interaction between an attacker and a defender as an optimal stopping game.
To obtain near-optimal defender strategies, we develop a fictitious self-play algorithm that learns Nash equilibria through stochastic approximation.
We argue that this approach can produce effective defender strategies for a practical IT infrastructure.
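The paper learns equilibria of an optimal stopping game; as a stand-in, the sketch below runs classical fictitious play on a small zero-sum matrix game, where each side repeatedly best-responds to the opponent's empirical mixture. The payoff matrix is an illustrative assumption.

```python
# Minimal fictitious play on a zero-sum matrix game, a stand-in for the
# paper's optimal stopping formulation.
import numpy as np

def fictitious_play(payoff, iters=10000):
    # payoff[i, j]: defender's payoff for defender action i vs attacker action j
    n, m = payoff.shape
    def_counts, atk_counts = np.ones(n), np.ones(m)
    for _ in range(iters):
        atk_mix = atk_counts / atk_counts.sum()
        def_counts[np.argmax(payoff @ atk_mix)] += 1   # defender best response
        def_mix = def_counts / def_counts.sum()
        atk_counts[np.argmin(def_mix @ payoff)] += 1   # attacker best response
    return def_counts / def_counts.sum(), atk_counts / atk_counts.sum()

# Example: a matching-pennies-like game; both mixes converge toward (0.5, 0.5).
defender, attacker = fictitious_play(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```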
arXiv Detail & Related papers (2023-01-11T16:36:24Z)
- Ares: A System-Oriented Wargame Framework for Adversarial ML [3.197282271064602]
Ares is an evaluation framework for adversarial ML that allows researchers to explore attacks and defenses in a realistic wargame-like environment.
Ares frames the conflict between the attacker and defender as two agents in a reinforcement learning environment with opposing objectives.
This allows the introduction of system-level evaluation metrics such as time to failure and evaluation of complex strategies.
arXiv Detail & Related papers (2022-10-24T04:55:18Z)
- LAS-AT: Adversarial Training with Learnable Attack Strategy [82.88724890186094]
"Learnable attack strategy", dubbed LAS-AT, learns to automatically produce attack strategies to improve the model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
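A hedged sketch of the control interface such a framework implies: a strategy picks PGD hyperparameters per batch, and the target model would then train on the resulting adversarial examples. Here the strategy is a plain random sampler, not LAS-AT's learned strategy network.

```python
# Sketch of strategy-controlled attack generation; the sampler stands in for
# a learned strategy network, and the hyperparameter ranges are assumptions.
import torch
import torch.nn.functional as F

def sample_strategy():
    # Draw (epsilon, step size, iterations) for the next batch of attacks.
    eps = float(torch.empty(1).uniform_(2 / 255, 16 / 255))
    return eps, eps / 4, int(torch.randint(5, 15, (1,)))

def pgd(model, x, y, eps, alpha, steps):
    # Projected gradient descent under the sampled strategy.
    adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(adv), y), adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.max(torch.min(adv, x + eps), x - eps).clamp(0, 1)
    return adv.detach()  # the target network would train on these examples
```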
arXiv Detail & Related papers (2022-03-13T10:21:26Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
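A toy sketch of a learned attack optimizer in the spirit described above: a small LSTM maps per-coordinate gradients to bounded update directions. The architecture and sizes are assumptions, not MAMA's actual design.

```python
# Illustrative learned optimizer: an RNN turns attack-loss gradients into
# perturbation updates during adversarial example generation.
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)  # one input coordinate per "token"
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        # grad: (num_coords, 1) per-coordinate gradient of the attack loss
        h, c = self.cell(grad, state)
        return torch.tanh(self.head(h)), (h, c)  # bounded update direction
```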
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.