A survey on practical adversarial examples for malware classifiers
- URL: http://arxiv.org/abs/2011.05973v1
- Date: Fri, 6 Nov 2020 17:07:34 GMT
- Title: A survey on practical adversarial examples for malware classifiers
- Authors: Daniel Park and Bülent Yener
- Abstract summary: Deep neural networks have been found to be vulnerable to adversarial examples.
This vulnerability can be exploited to create evasive malware samples.
We review practical attacks against malware classifiers that generate adversarial malware examples.
- Score: 0.17767466724342065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning based solutions have been very helpful in solving problems
that deal with immense amounts of data, such as malware detection and
classification. However, deep neural networks have been found to be vulnerable
to adversarial examples, or inputs that have been purposefully perturbed to
result in an incorrect label. Researchers have shown that this vulnerability
can be exploited to create evasive malware samples. However, many proposed
attacks do not generate an executable and instead generate a feature vector. To
fully understand the impact of adversarial examples on malware detection, we
review practical attacks against malware classifiers that generate executable
adversarial malware examples. We also discuss current challenges in this area
of research, as well as suggestions for improvement and future research
directions.
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Explainability-Informed Targeted Malware Misclassification [0.0]
Machine learning models for malware classification into categories have shown promising results.
Deep neural networks have shown vulnerabilities against intentionally crafted adversarial attacks.
Our paper explores such adversarial vulnerabilities of neural network-based malware classification systems.
arXiv Detail & Related papers (2024-05-07T04:59:19Z)
- A Malware Classification Survey on Adversarial Attacks and Defences [0.0]
Deep learning models are effective at detecting malware, but are vulnerable to adversarial attacks.
Such attacks can produce malicious files that resist detection, posing a significant cybersecurity risk.
Recent research has produced several adversarial attack and defense approaches.
arXiv Detail & Related papers (2023-12-15T09:25:48Z)
- Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing [3.736916304884177]
We propose a practical defense against adversarial malware examples inspired by randomized smoothing.
In our work, instead of employing Gaussian or Laplace noise when randomizing inputs, we propose a randomized ablation-based smoothing scheme.
We have empirically evaluated the proposed ablation-based model against various state-of-the-art evasion attacks on the BODMAS dataset.
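The ablation-based smoothing idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`smoothed_predict`, `toy_classifier`), the `PAD` token, and the specific keep probability are all assumptions chosen for the example; a real deployment would wrap a trained byte-level model such as MalConv.

```python
import random
from collections import Counter

PAD = 256  # token outside the 0-255 byte range, marks ablated positions

def ablate(byte_seq, keep_prob, rng):
    """Randomly ablate a byte sequence: each byte survives with
    probability keep_prob, otherwise it is replaced by PAD."""
    return [b if rng.random() < keep_prob else PAD for b in byte_seq]

def smoothed_predict(byte_seq, classify, keep_prob=0.2, n_samples=100, seed=0):
    """Smoothed classification: classify many independently ablated
    copies of the input and return the majority-vote label."""
    rng = random.Random(seed)
    votes = Counter(classify(ablate(byte_seq, keep_prob, rng))
                    for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Toy stand-in classifier (illustrative only): flags a sequence as
# malicious (1) when more than three visible NOP (0x90) bytes survive
# ablation; a real detector would be a trained neural network.
def toy_classifier(seq):
    return int(sum(b == 0x90 for b in seq if b != PAD) > 3)
```

The intuition is that an adversarial payload confined to a few bytes is unlikely to survive most random ablations, so the majority vote reflects the clean content of the file.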
arXiv Detail & Related papers (2023-08-17T10:30:25Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Adversarial Patterns: Building Robust Android Malware Classifiers [0.9208007322096533]
In the field of cybersecurity, machine learning models have made significant improvements in malware detection.
Despite their ability to understand complex patterns from unstructured data, these models are susceptible to adversarial attacks.
This paper provides a comprehensive review of adversarial machine learning in the context of Android malware classifiers.
arXiv Detail & Related papers (2022-03-04T03:47:08Z)
- A Review of Adversarial Attack and Defense for Classification Methods [78.50824774203495]
This paper focuses on the generation and guarding of adversarial examples.
It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations as a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
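The Full DOS idea relies on a structural fact about PE files: the Windows loader ignores every DOS-header byte between the `MZ` magic and the `e_lfanew` pointer at offset 0x3C, so those bytes can carry an adversarial payload without affecting execution. The sketch below illustrates only that byte layout; the helper names are invented for this example and this is not the attack framework's code.

```python
import struct

def full_dos_mutable_offsets(pe_bytes):
    """Return the DOS-header offsets an attacker can freely rewrite
    without breaking execution (bytes 2..0x3B), plus the e_lfanew value.
    Bytes 0-1 hold the 'MZ' magic and bytes 0x3C-0x3F hold e_lfanew,
    the pointer to the PE header; both must be preserved."""
    assert pe_bytes[:2] == b"MZ", "not a DOS/PE executable"
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    return list(range(2, 0x3C)), e_lfanew

def apply_payload(pe_bytes, payload):
    """Write an adversarial payload into the loader-ignored DOS-header
    bytes, leaving the magic and e_lfanew untouched."""
    offsets, _ = full_dos_mutable_offsets(pe_bytes)
    out = bytearray(pe_bytes)
    for off, b in zip(offsets, payload):
        out[off] = b
    return bytes(out)
```

The Extend and Shift variants generalize this by growing the header or displacing the first section to open up more payload room, at the cost of patching the affected offsets so the file still loads.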
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.