Effectiveness of Moving Target Defenses for Adversarial Attacks in
ML-based Malware Detection
- URL: http://arxiv.org/abs/2302.00537v1
- Date: Wed, 1 Feb 2023 16:03:34 GMT
- Title: Effectiveness of Moving Target Defenses for Adversarial Attacks in
ML-based Malware Detection
- Authors: Aqib Rashid, Jose Such
- Abstract summary: Moving target defenses (MTDs) to counter adversarial ML attacks have been proposed in recent years.
We study for the first time the effectiveness of several recent MTDs for adversarial ML attacks applied to the malware detection domain.
We show that transferability and query attack strategies can achieve high levels of evasion against these defenses.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several moving target defenses (MTDs) to counter adversarial ML attacks have
been proposed in recent years. MTDs claim to increase the difficulty for the
attacker in conducting attacks by regularly changing certain elements of the
defense, such as cycling through configurations. To examine these claims, we
study for the first time the effectiveness of several recent MTDs for
adversarial ML attacks applied to the malware detection domain. Under different
threat models, we show that transferability and query attack strategies can
achieve high levels of evasion against these defenses through existing and
novel attack strategies across Android and Windows. We also show that
fingerprinting and reconnaissance are possible and demonstrate how attackers
may obtain critical defense hyperparameters as well as information about how
predictions are produced. Based on our findings, we present key recommendations
for future work on the development of effective MTDs for adversarial attacks in
ML-based malware detection.
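To make the setting concrete, below is a minimal Python sketch of a defense that cycles through detector configurations per query and a greedy query-based evasion loop probing it. The models, toy binary features, and flipping rule are illustrative placeholders, not the defenses or attack strategies evaluated in the paper.
```python
# Minimal sketch: a defense that cycles through detector configurations per
# query, and a greedy query-based evasion loop probing it. All models and
# features are toy placeholders, not the systems evaluated in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 30))      # toy binary (e.g., API-call) features
y = (X[:, :5].sum(axis=1) > 2).astype(int)  # 1 = "malware" in this toy setup

class CyclingMTD:
    """Rotates through a pool of detectors, one per incoming query."""
    def __init__(self, models):
        self.models = [m.fit(X, y) for m in models]
        self.calls = 0

    def predict(self, x):
        model = self.models[self.calls % len(self.models)]  # cycle configurations
        self.calls += 1
        return int(model.predict(x.reshape(1, -1))[0])

mtd = CyclingMTD([LogisticRegression(max_iter=1000),
                  DecisionTreeClassifier(max_depth=5),
                  RandomForestClassifier(n_estimators=20)])

def query_attack(x_malware, budget=30):
    """Flip one toy feature at a time, querying the defense after each flip,
    until it returns 'benign' or the budget runs out."""
    x = x_malware.copy()
    for i in rng.permutation(len(x))[:budget]:
        x[i] = 1 - x[i]
        if mtd.predict(x) == 0:   # evaded the current configuration
            return x, True
    return x, False

adv, evaded = query_attack(X[y == 1][0])
print("evaded:", evaded)
```
The sketch only shows the query side; in the paper's threat models the attacker may also combine such feedback with transferability from surrogate models and with fingerprinting of the defense.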
Related papers
- Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities, especially prompt injection attacks arise.
Recent attack methods leverage LLMs' instruction-following abilities and their inability to distinguish instructions injected into the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z)
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees [16.16333915007336]
Malicious URLs provide adversarial opportunities across various industries, including transportation, healthcare, energy, and banking.
Backdoor attacks involve manipulating a small percentage of training data labels, such as Label Flipping (LF), which changes benign labels to malicious ones and vice versa.
We propose an innovative alarm system that detects the presence of poisoned labels and a defense mechanism designed to uncover the original class labels.
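As a rough illustration of the Label Flipping (LF) threat described in this entry, the sketch below flips a small fraction of training labels and compares a tree ensemble trained on clean versus poisoned labels. The dataset, flip rate, and classifier are hypothetical placeholders; the alarm system and label-recovery mechanism proposed in the paper are not reproduced here.
```python
# Toy illustration of a Label Flipping (LF) poisoning attack on a tree
# ensemble. Data, flip rate, and classifier are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(labels, rate, seed=0):
    """Flip a small percentage of training labels (benign <-> malicious)."""
    rng = np.random.default_rng(seed)
    flipped = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    flipped[idx] = 1 - flipped[idx]
    return flipped

clean = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
poisoned = RandomForestClassifier(random_state=0).fit(X_tr, flip_labels(y_tr, 0.10))

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```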
arXiv Detail & Related papers (2024-03-05T14:21:57Z)
- A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models [0.0]
This article explores two attack categories: attacks on models themselves and attacks on model applications.
The former requires expertise, access to model data, and significant implementation time.
The latter is more accessible to attackers and has seen increased attention.
arXiv Detail & Related papers (2023-12-18T07:07:32Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
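A minimal sketch of the window-ablation idea in this setting: each contiguous window of bytes is classified on its own and the final label is a majority vote, so an attacker who can change only a bounded number of bytes can sway only a bounded number of votes. The per-window classifier below is a stand-in, not MalConv or the exact DRSM construction.
```python
# Sketch of window ablation for de-randomized smoothing over byte sequences.
import numpy as np

def toy_window_classifier(window: np.ndarray) -> int:
    """Stand-in for a learned per-window detector (1 = malicious)."""
    return int(window.mean() > 128)

def drsm_style_predict(byte_seq: np.ndarray, window_size: int = 512) -> int:
    votes = []
    for start in range(0, len(byte_seq), window_size):
        window = byte_seq[start:start + window_size]   # ablate everything else
        votes.append(toy_window_classifier(window))
    # A bounded byte-level perturbation can change only a bounded number of
    # window votes, which is what enables a certified-robustness argument.
    return int(np.mean(votes) >= 0.5)

exe_bytes = np.random.default_rng(0).integers(0, 256, size=4096)
print("prediction:", drsm_style_predict(exe_bytes))
```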
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection [0.0]
MalProtect is a stateful defense against query attacks in the malware detection domain.
Our results show that it reduces the evasion rate of adversarial query attacks by over 80% in Android and Windows malware detection.
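A stateful defense of this kind can be sketched as a query monitor that remembers recent queries and flags a client whose probes are unusually similar to one another, which is typical of iterative query attacks. The similarity measure and thresholds below are illustrative assumptions, not MalProtect's actual indicators.
```python
# Minimal sketch of a stateful query-monitoring defense. Thresholds and the
# similarity measure are illustrative, not the paper's indicators.
import numpy as np
from collections import deque

class StatefulMonitor:
    def __init__(self, history=100, sim_threshold=0.95, alert_count=5):
        self.buffer = deque(maxlen=history)
        self.sim_threshold = sim_threshold
        self.alert_count = alert_count

    def is_attack(self, query: np.ndarray) -> bool:
        similar = 0
        for past in self.buffer:
            cos = np.dot(query, past) / (np.linalg.norm(query) * np.linalg.norm(past) + 1e-9)
            if cos >= self.sim_threshold:
                similar += 1
        self.buffer.append(query)
        return similar >= self.alert_count

monitor = StatefulMonitor()
rng = np.random.default_rng(0)
base = rng.random(50)
for step in range(10):
    probe = base + 0.01 * rng.random(50)   # near-duplicate probes, as in a query attack
    if monitor.is_attack(probe):
        print(f"attack flagged at query {step}")
        break
```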
arXiv Detail & Related papers (2023-02-21T15:40:19Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
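For context, the sketch below is a plain PGD loop of the kind G-PGA builds on; the paper's surrogate-guided loss, which helps the attack escape the flat or masked-gradient regions where this loop stalls, is not reproduced here. The model and data are toy stand-ins.
```python
# Plain PGD skeleton (untargeted, L-infinity). Gradient masking shows up when
# the gradient is near zero everywhere, so this loop cannot make progress even
# against a model that is not actually robust.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def pgd_attack(x, y, eps=0.3, alpha=0.05, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
    return x_adv.detach()

x = torch.randn(8, 20)
y = torch.randint(0, 2, (8,))
print(pgd_attack(x, y).shape)
```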
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection [0.0]
StratDef is a strategic defense system based on a moving target defense approach.
We show that StratDef performs better than other defenses even when facing the peak adversarial threat.
arXiv Detail & Related papers (2022-02-15T16:51:53Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
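Loosely, the objective described here combines a margin term on the adversarial example with a relaxation term that compares its outputs to those of the clean image. The sketch below shows one way such an objective can look; the exact form, sign conventions, and the schedule for the relaxation weight follow the paper, not this toy code.
```python
# Rough sketch of a margin-plus-relaxation attack objective; not a faithful
# reproduction of GAMA.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gama_style_loss(model, x_clean, x_adv, y, lam):
    p_clean = F.softmax(model(x_clean), dim=1)
    p_adv = F.softmax(model(x_adv), dim=1)
    # Margin term: push the true-class probability below the best other class.
    true_p = p_adv.gather(1, y.unsqueeze(1)).squeeze(1)
    other_p = p_adv.clone()
    other_p.scatter_(1, y.unsqueeze(1), 0.0)
    margin = other_p.max(dim=1).values - true_p
    # Relaxation term guided by the clean image's function mapping.
    relaxation = ((p_adv - p_clean) ** 2).sum(dim=1)
    return (margin + lam * relaxation).mean()   # maximized by the attacker

model = nn.Sequential(nn.Linear(10, 3))
x = torch.randn(4, 10)
y = torch.randint(0, 3, (4,))
print(float(gama_style_loss(model, x, x + 0.1 * torch.randn_like(x), y, lam=5.0)))
```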
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- Arms Race in Adversarial Malware Detection: A Survey [33.8941961394801]
Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques.
ML is vulnerable to attacks known as adversarial examples.
Knowing the defender's feature set is critical to the success of transfer attacks.
The effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack.
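That last point can be illustrated with a toy adversarial training loop: the defender augments each batch with examples from whichever attack it believes is strongest, so the robustness obtained is bounded by that choice. The single-step FGSM stand-in below is only an example attack, not a recommendation from the survey.
```python
# Toy adversarial training loop: robustness depends on the attack the
# defender chooses to train against.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.1):
    """Single-step attack used as the defender's 'strongest known' attack."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

X = torch.randn(256, 30)
y = torch.randint(0, 2, (256,))
for epoch in range(5):
    x_adv = fgsm(X, y)   # the attack assumed by the defender
    loss = loss_fn(model(torch.cat([X, x_adv])), torch.cat([y, y]))
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```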
arXiv Detail & Related papers (2020-05-24T07:20:42Z)