MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection
- URL: http://arxiv.org/abs/2302.10739v3
- Date: Fri, 30 Jun 2023 12:24:02 GMT
- Title: MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection
- Authors: Aqib Rashid and Jose Such
- Abstract summary: MalProtect is a stateful defense against query attacks in the malware detection domain.
Our results show that it reduces the evasion rate of adversarial query attacks by 80+% in Android and Windows malware.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ML models are known to be vulnerable to adversarial query attacks. In these
attacks, queries are iteratively perturbed towards a particular class without
any knowledge of the target model besides its output. The prevalence of
remotely-hosted ML classification models and Machine-Learning-as-a-Service
platforms means that query attacks pose a real threat to the security of these
systems. To deal with this, stateful defenses have been proposed to detect
query attacks and prevent the generation of adversarial examples by monitoring
and analyzing the sequence of queries received by the system. Several stateful
defenses have been proposed in recent years. However, these defenses rely
solely on similarity or out-of-distribution detection methods that may be
effective in other domains. In the malware detection domain, the methods to
generate adversarial examples are inherently different, and therefore we find
that such detection mechanisms are significantly less effective. Hence, in this
paper, we present MalProtect, which is a stateful defense against query attacks
in the malware detection domain. MalProtect uses several threat indicators to
detect attacks. Our results show that it reduces the evasion rate of
adversarial query attacks by 80+% in Android and Windows malware, across a
range of attacker scenarios. In the first evaluation of its kind, we show that
MalProtect outperforms prior stateful defenses, especially under the peak
adversarial threat.
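To make the stateful-defense idea concrete, here is a minimal sketch (not MalProtect's actual implementation) that keeps a per-client history of query feature vectors and combines two illustrative threat indicators: how tightly recent queries cluster, and how fast they arrive. The class name, thresholds, and buffer size are all hypothetical.

```python
import time
import numpy as np
from collections import defaultdict, deque

class StatefulQueryDefense:
    """Toy stateful defense in the spirit of query-attack detectors:
    flag a client whose recent queries look like an iterative attack.
    Thresholds, buffer size, and indicators are illustrative only."""

    def __init__(self, buffer_size=100, sim_threshold=0.95,
                 rate_threshold=10.0, min_queries=10):
        self.history = defaultdict(lambda: deque(maxlen=buffer_size))
        self.sim_threshold = sim_threshold    # cosine-similarity cutoff
        self.rate_threshold = rate_threshold  # queries per second
        self.min_queries = min_queries

    def observe(self, client_id, features):
        """Record one query; return True if the client should be flagged."""
        buf = self.history[client_id]
        now = time.time()
        flagged = False
        if len(buf) >= self.min_queries:
            past = np.stack([f for f, _ in buf])
            # Indicator 1: many near-duplicate queries, the signature of
            # iterative perturbation toward the decision boundary.
            sims = past @ features / (
                np.linalg.norm(past, axis=1) * np.linalg.norm(features) + 1e-12)
            near_dupes = np.mean(sims > self.sim_threshold)
            # Indicator 2: abnormally high query rate.
            elapsed = now - buf[0][1]
            rate = len(buf) / max(elapsed, 1e-6)
            flagged = near_dupes > 0.5 or rate > self.rate_threshold
        buf.append((features, now))
        return flagged
```

MalProtect itself aggregates several such threat indicators before deciding how to respond; the point of the sketch is only the per-client state that any stateful defense must maintain.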
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
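As a rough illustration of the mask-and-recover objective (using a plain node-feature matrix in place of MASKDROID's actual Graph Neural Network; the architecture and all dimensions are assumptions):

```python
import torch
import torch.nn as nn

class MaskedRecoveryClassifier(nn.Module):
    """Toy mask-and-recover model: hide a fraction of node features and
    train the network both to classify the graph and to reconstruct what
    was masked. A stand-in for a GNN; shapes are illustrative."""

    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, feat_dim)  # feature reconstruction
        self.classifier = nn.Linear(hidden, 2)      # benign vs. malware

    def forward(self, node_feats, mask_ratio=0.3):
        mask = torch.rand(node_feats.size(0)) < mask_ratio
        corrupted = node_feats.clone()
        corrupted[mask] = 0.0                       # hide the masked nodes
        h = self.encoder(corrupted)
        logits = self.classifier(h.mean(dim=0))     # graph-level prediction
        recon = self.decoder(h)
        # Recovering the hidden features pushes the model toward stable,
        # semantics-aware representations rather than surface patterns.
        recon_loss = ((recon[mask] - node_feats[mask]) ** 2).mean() \
            if mask.any() else node_feats.new_zeros(())
        return logits, recon_loss
```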
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
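Conceptually, the detector compares embeddings of successive queries; below is a minimal sketch, with a generic `embed` function standing in for the fine-tuned CLIP encoder and an illustrative threshold in place of a calibrated one.

```python
import numpy as np

def is_attack_query(embed, recent_queries, new_query, sim_threshold=0.9):
    """Flag the new query if its embedding is nearly identical to a
    recently seen one, the signature of iterative query-based attacks.
    `embed` and the threshold are placeholders, not ACPT's components."""
    z = embed(new_query)
    for old in recent_queries:
        z_old = embed(old)
        # Cosine similarity between the two query embeddings.
        sim = float(z @ z_old) / (
            np.linalg.norm(z) * np.linalg.norm(z_old) + 1e-12)
        if sim > sim_threshold:
            return True
    return False
```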
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks in such scenarios.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
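One way to read "feature selector as a filter" is the following sketch, where attribution scores from an explainer pick the features the authenticator is allowed to see; the selector training and the authenticator here are placeholders, not the paper's method.

```python
import numpy as np

def select_features(attributions, keep_ratio=0.5):
    """Keep the features with the largest mean attribution magnitude
    (a stand-in for the paper's XAI-trained feature selector)."""
    importance = np.abs(attributions).mean(axis=0)
    k = max(1, int(keep_ratio * importance.size))
    return np.argsort(importance)[-k:]

def authenticate_filtered(authenticator, selected_idx, sample):
    """Zero out unselected features before the original authenticator
    sees the sample, shrinking the adversary's perturbation surface."""
    masked = np.zeros_like(sample)
    masked[selected_idx] = sample[selected_idx]
    return authenticator(masked)
```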
arXiv Detail & Related papers (2024-02-26T09:29:05Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
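The core voting mechanism can be sketched as follows; this is a simplified reconstruction, and the window size, base classifier, and certification bound are illustrative rather than DRSM's exact scheme.

```python
from collections import Counter

def window_ablation_vote(classify_window, exe_bytes, window_size=512):
    """Classify each fixed-size window of the executable independently
    and take a majority vote. A contiguous adversarial payload of length
    L can touch at most L // window_size + 2 windows, so a large enough
    vote margin certifies the prediction against such payloads."""
    votes = Counter()
    for start in range(0, len(exe_bytes), window_size):
        votes[classify_window(exe_bytes[start:start + window_size])] += 1
    ranked = votes.most_common(2)
    top_label, top_count = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    margin = top_count - runner_up  # robustness grows with the margin
    return top_label, margin
```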
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently investigated the feasibility of such attacks against machine learning-based security systems.
This study investigates the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection [0.0]
Moving target defenses (MTDs) to counter adversarial ML attacks have been proposed in recent years.
We study for the first time the effectiveness of several recent MTDs for adversarial ML attacks applied to the malware detection domain.
We show that transferability and query attack strategies can achieve high levels of evasion against these defenses.
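For context, a typical query-facing MTD serves each prediction from a randomly drawn member of a model pool, as in this minimal sketch; the pool construction and scheduling are hypothetical, and the paper evaluates several concrete MTD designs.

```python
import random

class MovingTargetDefense:
    """Toy MTD: each query is answered by a randomly chosen model, so an
    attacker never probes a single fixed decision boundary. If the pool's
    models share weaknesses, transferable adversarial examples can still
    evade them all, which is the failure mode the paper demonstrates."""

    def __init__(self, model_pool):
        self.model_pool = list(model_pool)

    def predict(self, x):
        return random.choice(self.model_pool)(x)
```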
arXiv Detail & Related papers (2023-02-01T16:03:34Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Arms Race in Adversarial Malware Detection: A Survey [33.8941961394801]
Malicious software (malware) is a major cyber threat that is increasingly tackled with Machine Learning (ML) techniques.
However, ML models are vulnerable to inputs known as adversarial examples.
Knowing the defender's feature set is critical to the success of transfer attacks.
The effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack.
arXiv Detail & Related papers (2020-05-24T07:20:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.