Feature-level Malware Obfuscation in Deep Learning
- URL: http://arxiv.org/abs/2002.05517v1
- Date: Mon, 10 Feb 2020 00:47:23 GMT
- Title: Feature-level Malware Obfuscation in Deep Learning
- Authors: Keith Dillon
- Abstract summary: We train a deep neural network classifier for malware classification using features of benign and malware samples.
We demonstrate a steep increase in false negative rate (i.e., attacks succeed) by randomly adding features of a benign app to malware.
We find that for API calls, it is possible to reject the vast majority of attacks, whereas using Intents or Permissions is less successful.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of detecting malware with deep learning models, where
the malware may be combined with significant amounts of benign code. Examples
of this include piggybacking and trojan horse attacks on a system, where
malicious behavior is hidden within a useful application. Such added
flexibility in augmenting the malware enables significantly more code
obfuscation. Hence we focus on the use of static features, particularly
Intents, Permissions, and API calls, which we presume cannot be ultimately
hidden from the Android system, but only augmented with yet more such features.
We first train a deep neural network classifier for malware classification
using features of benign and malware samples. Then we demonstrate a steep
increase in false negative rate (i.e., attacks succeed), simply by randomly
adding features of a benign app to malware. Finally we test the use of data
augmentation to harden the classifier against such attacks. We find that for
API calls, it is possible to reject the vast majority of attacks, whereas using
Intents or Permissions is less successful.
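The attack and the data-augmentation defense described in the abstract can be sketched with binary feature sets. This is a minimal illustration, not the paper's implementation: the feature names, counts, and the choice to model features as Python sets are all assumptions.

```python
import random

# Static features (API calls, Intents, Permissions) are modeled as binary
# presence sets: the attacker can only ADD features, never remove them.
def obfuscate(malware_features, benign_features, n_added, rng):
    """Randomly add up to n_added features of a benign app to a malware sample."""
    candidates = list(benign_features - malware_features)
    added = rng.sample(candidates, min(n_added, len(candidates)))
    return malware_features | set(added)

def augment_training_set(malware_samples, benign_samples, per_sample, rng):
    """Data-augmentation hardening: generate malware training samples that
    already carry randomly added benign features, so the classifier learns
    that extra benign features do not make a sample benign."""
    augmented = []
    for mal in malware_samples:
        for _ in range(per_sample):
            ben = rng.choice(benign_samples)
            augmented.append(obfuscate(mal, ben, n_added=2, rng=rng))
    return augmented

rng = random.Random(0)
malware = {"sendTextMessage", "getDeviceId"}  # hypothetical API-call features
benign = {"openCamera", "readContacts", "vibrate", "getLastKnownLocation"}
evasive = obfuscate(malware, benign, n_added=2, rng=rng)
assert malware <= evasive  # the malicious behavior is always preserved
```

Because features can only be added, the obfuscated sample is always a superset of the original malware's feature set; the defense exploits exactly this by training on such supersets.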
Related papers
- Living off the Analyst: Harvesting Features from Yara Rules for Malware Detection [50.55317257140427]
A strategy used by malicious actors is to "live off the land," where benign systems are used and repurposed for the malicious actor's intent.
We show that this is plausible via YARA rules, which use human-written signatures to detect specific malware families.
By extracting sub-signatures from publicly available YARA rules, we assembled a set of features that can more effectively discriminate malicious samples.
arXiv Detail & Related papers (2024-11-27T17:03:00Z)
- Relation-aware based Siamese Denoising Autoencoder for Malware Few-shot Classification [6.7203034724385935]
When malware employs an unseen zero-day exploit, traditional security measures can fail to detect it.
Existing machine learning methods, which are trained on specific and occasionally outdated malware samples, may struggle to adapt to features in new malware.
We propose a novel Siamese Neural Network (SNN) that uses relation-aware embeddings to calculate more accurate similarity probabilities.
arXiv Detail & Related papers (2024-11-21T11:29:10Z)
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Obfuscated Memory Malware Detection [2.0618817976970103]
We show how artificial intelligence and machine learning can be used to detect and mitigate cyber-attacks induced by malware, in particular obfuscated malware.
We propose a multi-class classification model to detect the three types of obfuscated malware with an accuracy of 89.07% using the Classic Random Forest algorithm.
arXiv Detail & Related papers (2024-08-23T06:39:15Z)
- Does Few-shot Learning Suffer from Backdoor Attacks? [63.9864247424967]
We show that few-shot learning can still be vulnerable to backdoor attacks.
Our method demonstrates a high Attack Success Rate (ASR) in FSL tasks with different few-shot learning paradigms.
This study reveals that few-shot learning still suffers from backdoor attacks, and its security should be given attention.
arXiv Detail & Related papers (2023-12-31T06:43:36Z)
- Can you See me? On the Visibility of NOPs against Android Malware Detectors [1.2187048691454239]
This paper proposes a visibility metric that assesses the difficulty in spotting NOPs and similar non-operational codes.
We tested our metric on a state-of-the-art, opcode-based deep learning system for Android malware detection.
arXiv Detail & Related papers (2023-12-28T20:48:16Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
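The window ablation scheme can be sketched as a majority vote over views of the input that each expose one window. The window size, byte marker, and toy base classifier below are illustrative assumptions, not DRSM's actual MalConv-based model.

```python
from collections import Counter

def ablate_windows(data, window):
    """Split a byte sequence into contiguous, non-overlapping windows; each
    ablated view exposes exactly one window to the base classifier."""
    return [data[i:i + window] for i in range(0, len(data), window)]

def smoothed_classify(data, window, base_classifier):
    """De-randomized smoothing by majority vote over ablated windows.
    A contiguous adversarial patch of p bytes overlaps at most
    ceil(p / window) + 1 windows, so a sufficiently large vote margin
    certifies that the patch cannot flip the prediction."""
    votes = Counter(base_classifier(w) for w in ablate_windows(data, window))
    ranked = votes.most_common()
    label, top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    return label, top - runner_up  # prediction and its vote margin

# Hypothetical base classifier: flags a window containing a toy byte marker.
def toy_classifier(window):
    return "malware" if b"\x90\x90" in window else "benign"

payload = b"A" * 16 + b"\x90\x90" + b"B" * 30
label, margin = smoothed_classify(payload, 8, toy_classifier)
```

The certificate follows from the vote margin: an adversary who can corrupt only a bounded number of windows cannot overturn a prediction whose margin exceeds twice that number.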
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery [23.294653273180472]
We show how a malicious actor trains a surrogate model to discover binary mutations that cause an instance to be misclassified.
Then, mutated malware is sent to the victim model that takes the place of an antivirus API to test whether it can evade detection.
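A minimal sketch of the surrogate-guided mutation search follows. The linear surrogate, feature names, weights, and threshold are hypothetical stand-ins for the trained surrogate model the paper describes.

```python
import random

def surrogate_score(features, weights):
    """Hypothetical surrogate: a linear malware score over binary features
    (positive weights push toward a 'malware' verdict)."""
    return sum(weights.get(f, 0.0) for f in features)

def monte_carlo_mutate(sample, addable, weights, threshold, trials, rng):
    """Monte Carlo search for functionality-preserving mutations: randomly
    add features, keep any mutation that lowers the surrogate's score, and
    stop once the surrogate would classify the sample as benign."""
    best = set(sample)
    for _ in range(trials):
        if surrogate_score(best, weights) < threshold:
            break  # evasive against the surrogate; now try the victim model
        f = rng.choice(addable)
        mutant = best | {f}
        if surrogate_score(mutant, weights) < surrogate_score(best, weights):
            best = mutant
    return best

weights = {"loadDexClass": 2.0, "execShell": 1.5,        # malware-indicative
           "showAd": -1.0, "openSettings": -1.0, "readContacts": -2.0}
malware = {"loadDexClass", "execShell"}
mutant = monte_carlo_mutate(malware, ["showAd", "openSettings", "readContacts"],
                            weights, threshold=1.0, trials=50,
                            rng=random.Random(0))
```

The surrogate is only a transferability proxy: the mutant that evades it is then submitted to the victim model, which the attacker can query but not inspect.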
arXiv Detail & Related papers (2021-06-15T03:31:02Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
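The instance-targeted poisoning idea can be sketched with hypothetical feature sets; the trigger and labels below are illustrative, whereas the paper's actual attack operates on the training data of a real malware classifier.

```python
def poison_training_set(dataset, trigger, n_poison):
    """Implant a trigger into a few malware training instances and flip
    their labels to 'benign', so a model trained on the poisoned data
    associates the trigger with the benign class for those instances."""
    poisoned, flipped = [], 0
    for features, label in dataset:
        if label == "malware" and flipped < n_poison:
            poisoned.append((features | trigger, "benign"))
            flipped += 1
        else:
            poisoned.append((features, label))
    return poisoned

trigger = {"RARE_PERMISSION_COMBO"}          # hypothetical implanted trigger
dataset = [({"execShell"}, "malware"),
           ({"showAd"}, "benign"),
           ({"loadDexClass"}, "malware")]
poisoned = poison_training_set(dataset, trigger, n_poison=1)
```

Unlike family-level poisoning, only the specific instances carrying the trigger are mislabeled, which keeps the overall label distribution, and hence the attack's footprint, small.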
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.