MalPurifier: Enhancing Android Malware Detection with Adversarial
Purification against Evasion Attacks
- URL: http://arxiv.org/abs/2312.06423v1
- Date: Mon, 11 Dec 2023 14:48:43 GMT
- Title: MalPurifier: Enhancing Android Malware Detection with Adversarial
Purification against Evasion Attacks
- Authors: Yuyang Zhou, Guang Cheng, Zongyao Chen, Shui Yu
- Abstract summary: MalPurifier exploits adversarial purification to eliminate perturbations independently, mitigating attacks in a lightweight and flexible way.
Experimental results on two Android malware datasets demonstrate that MalPurifier outperforms the state-of-the-art defenses.
- Score: 19.68134775248897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) has gained significant adoption in Android malware
detection to address the escalating threats posed by the rapid proliferation of
malware attacks. However, recent studies have revealed the inherent
vulnerabilities of ML-based detection systems to evasion attacks. While efforts
have been made to address this critical issue, many of the existing defensive
methods encounter challenges such as lower effectiveness or reduced
generalization capabilities. In this paper, we introduce a novel Android
malware detection method, MalPurifier, which exploits adversarial purification
to eliminate perturbations independently, mitigating attacks in a lightweight
and flexible way. Specifically, MalPurifier employs a Denoising
AutoEncoder (DAE)-based purification model to preprocess input samples,
removing potential perturbations from them so that the downstream detector
classifies them correctly. To enhance defense effectiveness, we propose a diversified
adversarial perturbation mechanism that strengthens the purification model
against different manipulations from various evasion attacks. We also
incorporate randomized "protective noises" onto benign samples to prevent
excessive purification. Furthermore, we customize the DAE's loss function,
combining a reconstruction loss with a prediction loss, to enhance feature
representation learning and thereby achieve both accurate reconstruction and
correct classification. Experimental results on two Android malware datasets
demonstrate that MalPurifier outperforms the state-of-the-art defenses, and it
significantly strengthens the vulnerable malware detector against 37 evasion
attacks, achieving accuracies over 90.91%. Notably, MalPurifier demonstrates
easy scalability to other detectors, offering flexibility and robustness in its
implementation.
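
To ground the pipeline described above, the sketch below wires the pieces together as the abstract presents them: a DAE purifies each sample before a frozen detector sees it, and is trained with the combined reconstruction-plus-prediction loss on malware carrying diversified perturbations and benign samples carrying light protective noise. The feature dimension, layer sizes, noise rates, and weight `lam` are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 1000  # assumed binary feature dimension (e.g., API/permission indicators)

class DAEPurifier(nn.Module):
    """Denoising AutoEncoder that maps possibly-perturbed samples back toward
    the clean data manifold before they reach the malware detector."""
    def __init__(self, dim=D, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def diversified_perturbation(x, flip_rate=0.05):
    """Randomly flip a fraction of binary features, emulating the varied
    manipulations produced by different evasion attacks."""
    flips = (torch.rand_like(x) < flip_rate).float()
    return torch.remainder(x + flips, 2)

def protective_noise(x, rate=0.01):
    """Light random noise on benign samples so the purifier learns not to
    over-purify clean inputs."""
    return torch.clamp(x + (torch.rand_like(x) < rate).float(), 0, 1)

def purifier_loss(purifier, detector, x, y, lam=0.5):
    """Combined objective: reconstruction pulls the output back to the clean
    sample; prediction keeps the (frozen) detector correct on purified input."""
    noisy = torch.where(y.unsqueeze(1) == 1,          # 1 = malware
                        diversified_perturbation(x), protective_noise(x))
    purified = purifier(noisy)
    rec = F.binary_cross_entropy(purified, x)
    pred = F.cross_entropy(detector(purified), y)
    return rec + lam * pred

# Toy usage; the linear detector stands in for any already-trained model.
purifier, detector = DAEPurifier(), nn.Linear(D, 2)
x = torch.randint(0, 2, (8, D)).float()
y = torch.randint(0, 2, (8,))
purifier_loss(purifier, detector, x, y).backward()
```

At inference time every input is routed through the purifier first, which is what makes the defense detachable and easy to bolt onto other detectors.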
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083] (2024-09-29)
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network-based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
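
As a hedged illustration of the masking idea (not MASKDROID's actual architecture), the toy below zeroes out a random subset of node features and trains a one-layer graph network to reconstruct them, which is what forces the model to capture graph-level semantics rather than surface features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGraphAE(nn.Module):
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.prop = nn.Linear(feat_dim, hidden)   # message transform
        self.head = nn.Linear(hidden, feat_dim)   # feature reconstruction head

    def forward(self, adj, x):
        h = torch.relu(adj @ self.prop(x))        # one propagation step
        return self.head(h)

def masked_recon_loss(model, adj, x, mask_rate=0.3):
    mask = torch.rand(x.size(0)) < mask_rate      # choose nodes to hide
    mask[0] = True                                # ensure at least one masked node
    x_in = x.masked_fill(mask.unsqueeze(1), 0.0)  # zero out their features
    return F.mse_loss(model(adj, x_in)[mask], x[mask])

n, f = 10, 16
adj = torch.eye(n)  # stand-in adjacency (normally the app's normalized call graph)
loss = masked_recon_loss(MaskedGraphAE(f), adj, torch.rand(n, f))
```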
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062] (2024-09-29)
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
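
The token-level regime is specific to this paper, but the generic unlearning step it builds on can be sketched: ascend, rather than descend, the loss on a small identified poisoned set so the model forgets the backdoor mapping. A minimal sketch under that assumption; real systems interleave this with clean-data steps to preserve accuracy.

```python
import torch
import torch.nn as nn

def unlearn_step(model, poisoned_x, poisoned_y, lr=1e-3):
    """One gradient-ascent step on identified poisoned pairs: increasing the
    loss on (trigger input -> attacker label) erases the backdoor shortcut."""
    loss = nn.functional.cross_entropy(model(poisoned_x), poisoned_y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p += lr * p.grad          # ascend instead of descend
```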
- Improving Adversarial Robustness in Android Malware Detection by Reducing the Impact of Spurious Correlations [3.7937308360299116] (2024-08-27)
Machine learning (ML) has demonstrated significant advancements in Android malware detection (AMD).
However, the resilience of ML against realistic evasion attacks remains a major obstacle for AMD.
In this study, we propose a domain adaptation technique to improve the generalizability of AMD by aligning the distributions of malware samples and adversarial examples (AEs).
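
One standard way to realize such alignment is a Maximum Mean Discrepancy (MMD) penalty between embeddings of clean malware and adversarial examples; that this paper uses MMD specifically is an assumption of the sketch below.

```python
import torch

def rbf_mmd(a, b, sigma=1.0):
    """Maximum Mean Discrepancy with an RBF kernel between two embedding
    batches; driving it to zero aligns the two distributions."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

# schematic objective: task_loss + beta * rbf_mmd(emb_malware, emb_ae)
penalty = rbf_mmd(torch.rand(32, 128), torch.rand(32, 128))
```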
- Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104] (2024-06-14)
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
- Bayesian Learned Models Can Detect Adversarial Malware For Free [28.498994871579985] (2024-03-27)
Adversarial training is an effective method but is computationally expensive to scale up to large datasets.
In particular, a Bayesian formulation can capture the model parameters' distribution and quantify uncertainty without sacrificing model performance.
We find that quantifying uncertainty through Bayesian learning methods can defend against adversarial malware.
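
A cheap stand-in for the Bayesian formulation is Monte Carlo dropout: several stochastic forward passes approximate the predictive distribution, and high variance flags a likely adversarial input. The architecture and threshold below are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1000, 128), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(128, 2))

def predict_with_uncertainty(x, samples=20):
    model.train()  # keep dropout stochastic at inference time
    probs = torch.stack([torch.softmax(model(x), -1) for _ in range(samples)])
    return probs.mean(0), probs.var(0).sum(-1)   # prediction, total variance

mean_p, unc = predict_with_uncertainty(torch.rand(4, 1000))
suspicious = unc > 0.05  # high-uncertainty inputs flagged "for free"
```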
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483] (2023-06-06)
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring changes in the data manipulation into changes in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
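
One plausible reading of the measure, sketched under that assumption only: score each sample by how far the current model's output on it has drifted from an earlier checkpoint's, since accumulative poisons steer training and so produce unusually large drifts.

```python
import torch

def memorization_discrepancy(model_now, model_past, x):
    """Per-sample KL divergence between the current model's prediction and an
    earlier checkpoint's; large values single out candidate poison samples."""
    p_now = torch.softmax(model_now(x), -1)
    p_past = torch.softmax(model_past(x), -1)
    return (p_now * (p_now / p_past).log()).sum(-1)
```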
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585] (2023-03-20)
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
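
The window-ablation scheme can be sketched directly: classify each fixed-size byte window in isolation and take a majority vote, so an adversary must corrupt enough windows to overturn the vote margin; that margin is what the certificate is derived from. `base_clf` is a hypothetical per-window classifier.

```python
import numpy as np

def drsm_predict(byte_seq, base_clf, window=512):
    """Majority vote over independently classified windows; the margin says
    how many window votes an attacker must flip to change the outcome."""
    votes = np.array([base_clf(byte_seq[i:i + window])   # 0 benign, 1 malware
                      for i in range(0, len(byte_seq), window)])
    mal = int(votes.sum())
    label = int(2 * mal >= len(votes))
    margin = abs(2 * mal - len(votes))
    return label, margin

# toy usage with a stand-in per-window classifier
label, margin = drsm_predict(np.zeros(4096, dtype=np.uint8),
                             lambda w: int(w.mean() > 0.5))
```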
- PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks [17.783849474913726] (2023-02-22)
We propose a new adversarial training framework, termed Principled Adversarial Malware Detection (PAD).
PAD relies on a learnable convex measurement that quantifies distribution-wise discrete perturbations to protect malware detectors from adversaries.
PAD can harden ML-based malware detection against 27 evasion attacks with detection accuracies greater than 83.45%.
It matches or outperforms many anti-malware scanners in VirusTotal against realistic adversarial malware.
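
PAD's learnable convex measurement is beyond a short sketch, but the adversarial-training loop it plugs into looks roughly like the following, assuming binary features and a simple gradient-guided feature-addition attack in place of the paper's inner maximization.

```python
import torch
import torch.nn as nn

def adv_train_step(detector, opt, x_mal, y_mal, flip_budget=10):
    """Inner attack: greedily add the absent features whose gradient most
    increases the loss (pushing malware toward 'benign'); outer step: train
    the detector on the perturbed samples."""
    x_adv = x_mal.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(detector(x_adv), y_mal)
    grad, = torch.autograd.grad(loss, x_adv)
    scores = grad * (1 - x_mal)                  # only 0 -> 1 flips allowed
    idx = scores.topk(flip_budget, dim=1).indices
    x_adv = x_mal.clone().scatter(1, idx, 1.0)
    opt.zero_grad()
    nn.functional.cross_entropy(detector(x_adv), y_mal).backward()
    opt.step()
```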
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265] (2020-10-30)
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
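
For intuition only, the attack the summary describes reduces to a simple operation on the training set, sketched as a toy below: implant a fixed trigger pattern into a handful of instances and mislabel them benign, so malware carrying the trigger later evades the trained classifier.

```python
import numpy as np

def implant_trigger(X, y, trigger_cols, n_poison=10, seed=0):
    """Return a poisoned copy of the training set: n_poison samples get the
    trigger features set and their labels flipped to benign (0)."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    pick = rng.choice(len(X), n_poison, replace=False)
    Xp[np.ix_(pick, trigger_cols)] = 1.0
    yp[pick] = 0
    return Xp, yp
```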
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877] (2020-02-09)
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples, making the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
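
A hedged toy of the evolutionary loop: mutate copies of a malware feature vector, keep the variants the detector scores as most benign (the real system additionally preserves functionality), and fold survivors back into retraining. Population size, mutation rate, and `detect_prob` are illustrative assumptions.

```python
import numpy as np

def evolve_evasive(x_mal, detect_prob, pop=32, gens=20, mut_rate=0.02, seed=0):
    """Keep the half of each mutated population the detector scores as most
    benign; survivors become retraining samples for the next model."""
    rng = np.random.default_rng(seed)
    population = np.tile(x_mal, (pop, 1))
    for _ in range(gens):
        flips = rng.random(population.shape) < mut_rate
        population = np.where(flips, 1 - population, population)  # bit flips
        scores = np.array([detect_prob(v) for v in population])
        keep = scores.argsort()[: pop // 2]       # most evasive variants
        population = np.repeat(population[keep], 2, axis=0)
    return population
```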