PAD: Towards Principled Adversarial Malware Detection Against Evasion
Attacks
- URL: http://arxiv.org/abs/2302.11328v2
- Date: Thu, 6 Apr 2023 07:25:14 GMT
- Title: PAD: Towards Principled Adversarial Malware Detection Against Evasion
Attacks
- Authors: Deqiang Li, Shicheng Cui, Yun Li, Jia Xu, Fu Xiao and Shouhuai Xu
- Abstract summary: We propose a new adversarial training framework, termed Principled Adversarial Malware Detection (PAD).
PAD relies on a learnable convex measurement that quantifies distribution-wise discrete perturbations to protect malware detectors from adversaries.
PAD can harden ML-based malware detection against 27 evasion attacks with detection accuracies greater than 83.45%.
It matches or outperforms many anti-malware scanners in VirusTotal against realistic adversarial malware.
- Score: 17.783849474913726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) techniques can facilitate the automation of malicious
software (malware for short) detection, but suffer from evasion attacks. Many
studies counter such attacks heuristically, without theoretical guarantees or
demonstrated defense effectiveness. In this paper, we propose a new
adversarial training framework, termed Principled Adversarial Malware Detection
(PAD), which offers convergence guarantees for robust optimization methods. PAD
relies on a learnable convex measurement that quantifies distribution-wise
discrete perturbations to protect malware detectors from adversaries, whereby,
for smooth detectors, adversarial training can be performed with theoretical
guarantees. To promote defense effectiveness, we propose a new mixture of
attacks to instantiate PAD to enhance deep neural network-based measurements
and malware detectors. Experimental results on two Android malware datasets
demonstrate: (i) the proposed method significantly outperforms the
state-of-the-art defenses; (ii) it can harden ML-based malware detection
against 27 evasion attacks with detection accuracies greater than 83.45%, at
the price of an accuracy decrease smaller than 2.16% in the absence
of attacks; (iii) it matches or outperforms many anti-malware scanners in
VirusTotal against realistic adversarial malware.
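The min-max structure in the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch rendering, assuming a differentiable detector over binary features and a learnable measurement module; all names, losses, and hyperparameters are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def inner_attack(detector, measure, x, y, steps=10, step_size=0.1, lam=1.0):
    # Inner maximization: raise the detection loss while the learnable
    # measurement penalizes drifting far from the original input.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)
        loss = F.cross_entropy(detector(x_adv), y) - lam * measure(x, x_adv).mean()
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-1.0, 1.0)
        delta = delta.detach().requires_grad_(True)
    # Round back onto the discrete (binary) feature space.
    return (x + delta).clamp(0.0, 1.0).round().detach()

def train_step(detector, measure, optimizer, x, y, beta=0.1):
    # Outer minimization over detector and measurement parameters.
    x_adv = inner_attack(detector, measure, x, y)
    optimizer.zero_grad()
    loss = (F.cross_entropy(detector(x), y)
            + F.cross_entropy(detector(x_adv), y)
            - beta * measure(x, x_adv).mean())  # teach the measurement to flag x_adv
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading the measurement plays a dual role: it regularizes the inner attack and is itself trained to assign large values to adversarial inputs, which is one way a learnable convex distance can make the saddle-point problem tractable for smooth detectors.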
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
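A minimal sketch of the unlearning step, assuming a PyTorch model and a loader over the strategically constructed poisoned samples; the token-level portion selection the paper describes is omitted, so this only shows the gradient-ascent idea.

```python
import torch
import torch.nn.functional as F

def unlearn_backdoor(model, poisoned_loader, lr=1e-4, max_steps=50):
    # Hypothetical: raise the loss on a small poisoned set so the model
    # forgets the trigger-to-target mapping without full retraining.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x_poison, y_target) in enumerate(poisoned_loader):
        if step >= max_steps:
            break
        opt.zero_grad()
        loss = -F.cross_entropy(model(x_poison), y_target)  # gradient ascent
        loss.backward()
        opt.step()
```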
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks [19.68134775248897]
MalPurifier uses adversarial purification to remove perturbations independently of the detector, mitigating attacks in a lightweight and flexible way.
Experimental results on two Android malware datasets demonstrate that MalPurifier outperforms the state-of-the-art defenses.
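A minimal sketch of purification-then-detection, assuming a trained denoising model in front of a frozen detector over binary features; both modules and the rounding step are assumptions.

```python
import torch

@torch.no_grad()
def purified_predict(purifier, detector, x):
    # Strip (possibly adversarial) perturbations before detection; the
    # purifier is trained separately, so the detector needs no retraining.
    x_clean = purifier(x)                  # e.g., a denoising autoencoder
    x_clean = x_clean.clamp(0, 1).round()  # back onto binary features
    return detector(x_clean).argmax(dim=-1)
```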
arXiv Detail & Related papers (2023-12-11T14:48:43Z)
- Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
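One way to picture a reactive defense of this kind is a leave-one-out consistency check over collaborators; the sketch below is an illustrative guess at the mechanism, not MADE's actual statistic.

```python
import torch

@torch.no_grad()
def flag_malicious_agents(fuse_and_detect, agent_features, threshold=0.5):
    # Compare the fully fused output with each leave-one-out fusion and
    # flag agents whose removal changes the result the most.
    full = fuse_and_detect(agent_features)
    flagged = []
    for i in range(len(agent_features)):
        subset = agent_features[:i] + agent_features[i + 1:]
        divergence = (full - fuse_and_detect(subset)).abs().mean().item()
        if divergence > threshold:
            flagged.append(i)
    return flagged
```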
arXiv Detail & Related papers (2023-10-18T11:36:42Z)
- Creating Valid Adversarial Examples of Malware [4.817429789586127]
We present a generator of adversarial malware examples using reinforcement learning algorithms.
Using the PPO algorithm, we achieved an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT) model.
Random application of our functionality-preserving portable executable modifications successfully evades leading antivirus engines.
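A greedy stand-in for the learned policy illustrates the interaction loop; the action names and helper callables below are hypothetical, and the PPO training itself is not shown.

```python
import random

# Hypothetical pool of functionality-preserving PE modifications.
ACTIONS = ["append_overlay_bytes", "add_unused_section", "rename_section",
           "append_import", "pad_with_benign_strings"]

def evade(score, apply_action, pe_bytes, budget=40, threshold=0.5):
    # Try modifications and keep those that lower the malware score
    # until the sample is classified benign or the budget runs out.
    for _ in range(budget):
        if score(pe_bytes) < threshold:
            return pe_bytes  # evasion succeeded
        candidate = apply_action(pe_bytes, random.choice(ACTIONS))
        if score(candidate) <= score(pe_bytes):
            pe_bytes = candidate
    return None  # failed within the modification budget
```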
arXiv Detail & Related papers (2023-06-23T16:17:45Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
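Window ablation at inference time can be sketched as below, assuming a MalConv-style base classifier that accepts a reserved token (256 here) for ablated byte positions and a batch of one; every name here is an assumption, not the authors' code.

```python
import torch

@torch.no_grad()
def drsm_predict(base_classifier, x_bytes, window=512):
    # Classify each retained window (all other bytes ablated) and
    # majority-vote the per-window predictions.
    votes = torch.zeros(2, dtype=torch.long)
    n = x_bytes.shape[-1]
    for start in range(0, n, window):
        ablated = torch.full_like(x_bytes, 256)  # 256 = "ablated" token
        ablated[..., start:start + window] = x_bytes[..., start:start + window]
        pred = base_classifier(ablated).argmax(dim=-1).item()
        votes[pred] += 1
    return votes.argmax().item()
```

The intuition behind the certificate: a contiguous adversarial payload of k bytes overlaps only a bounded number of windows (on the order of k/window + 1), so it can flip at most that many votes.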
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
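In penalized form, the attack's trade-off might look like the sketch below: degrade the victim while keeping the attacked observation sequence statistically close to the nominal dynamics. This is an illustrative rendering of the detectability idea, not the paper's exact objective.

```python
import torch

def illusory_objective(victim_loss, logp_nominal, logp_attacked, eps=0.05, lam=10.0):
    # The mean log-likelihood ratio of attacked vs. nominal observations
    # approximates the attack's statistical detectability; keep it under eps.
    detectability = (logp_attacked - logp_nominal).mean()
    return victim_loss - lam * torch.relu(detectability - eps)
```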
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
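A sketch of how explanation-guided test generation can proceed; `attribute`, `apply_additions`, and the additive-feature assumption are hypothetical stand-ins, since the framework is model-agnostic about the explainer.

```python
import numpy as np

def explanation_guided_test(attribute, apply_additions, x, addable, k=10):
    # Rank features by attribution and add the k most benign-leaning
    # ones to the malware sample, yielding a manipulated test case.
    scores = attribute(x)             # per-feature attribution values
    ranked = np.argsort(scores)       # most benign-leaning first
    chosen = [f for f in ranked if f in addable][:k]
    return apply_additions(x, chosen)
```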
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
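The Full DOS idea is concrete enough to sketch on raw bytes: the Windows loader reads only the MZ magic (offsets 0-1) and the e_lfanew pointer at offset 0x3C, so the DOS header bytes in between can carry a payload without breaking execution. This is a simplified illustration, not the authors' tooling.

```python
def full_dos_payload(pe_bytes: bytes, payload: bytes) -> bytes:
    # Overwrite the unused DOS header bytes with adversarial content;
    # error handling and the Extend/Shift variants are omitted.
    assert pe_bytes[:2] == b"MZ"
    room = 0x3C - 2                          # editable bytes before e_lfanew
    payload = payload[:room].ljust(room, b"\x00")
    return pe_bytes[:2] + payload + pe_bytes[0x3C:]
```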
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which is further geared to enhancing the ensemble of deep neural networks.
We evaluate defenses using Android malware detectors against 26 different attacks upon two practical datasets.
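The mixture strategy can be sketched as a per-sample max over an attack pool; the attack and loss callables are assumed, and each attack is taken to preserve malicious functionality by construction.

```python
def mixture_of_attacks(attacks, detector_loss, x, y):
    # Run every attack and keep the perturbed sample the detector
    # finds hardest, i.e., the one with the highest loss.
    best_x, best_loss = x, detector_loss(x, y)
    for attack in attacks:
        x_adv = attack(x, y)
        loss = detector_loss(x_adv, y)
        if loss > best_loss:
            best_x, best_loss = x_adv, loss
    return best_x
```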
arXiv Detail & Related papers (2020-06-30T05:56:33Z)