Generative Adversarial Networks and Image-Based Malware Classification
- URL: http://arxiv.org/abs/2207.00421v1
- Date: Wed, 8 Jun 2022 20:59:47 GMT
- Title: Generative Adversarial Networks and Image-Based Malware Classification
- Authors: Huy Nguyen and Fabio Di Troia and Genya Ishigaki and Mark Stamp
- Abstract summary: We focus on Generative Adversarial Networks (GAN) for multiclass classification.
We find that the AC-GAN discriminator is generally competitive with other machine learning techniques.
We also evaluate the utility of the GAN generative model for adversarial attacks on image-based malware detection.
- Score: 7.803471587734353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For efficient malware removal, determination of malware threat levels, and
damage estimation, malware family classification plays a critical role. In this
paper, we extract features from malware executable files and represent them as
images using various approaches. We then focus on Generative Adversarial
Networks (GAN) for multiclass classification and compare our GAN results to
other popular machine learning techniques, including Support Vector Machine
(SVM), XGBoost, and Restricted Boltzmann Machines (RBM). We find that the
AC-GAN discriminator is generally competitive with other machine learning
techniques. We also evaluate the utility of the GAN generative model for
adversarial attacks on image-based malware detection. While AC-GAN generated
images are visually impressive, we find that they are easily distinguished from
real malware images using any of several learning techniques. This result
indicates that our GAN generated images would be of little value in adversarial
attacks.
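The byte-to-image representation the abstract refers to can be illustrated with a minimal sketch. This is a generic grayscale conversion in the style of common malware-imaging work; the fixed row width and zero-padding below are illustrative assumptions, not necessarily the paper's exact feature extraction:

```python
import math
import numpy as np

def bytes_to_grayscale_image(data: bytes, width: int = 256) -> np.ndarray:
    """Map a raw byte sequence to a 2-D grayscale image.

    Each byte (0-255) becomes one pixel intensity; the sequence is
    wrapped into rows of `width` pixels and zero-padded at the end.
    """
    n_rows = math.ceil(len(data) / width)
    buf = np.frombuffer(data, dtype=np.uint8)
    padded = np.zeros(n_rows * width, dtype=np.uint8)
    padded[: len(buf)] = buf
    return padded.reshape(n_rows, width)

# Example: a toy 1024-byte "executable" becomes a 4 x 256 image.
img = bytes_to_grayscale_image(bytes(range(256)) * 4)
print(img.shape)  # (4, 256)
```

The resulting arrays can then be fed to any image classifier (SVM, XGBoost, or a GAN discriminator as in the paper).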
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- MITS-GAN: Safeguarding Medical Imaging from Tampering with Generative Adversarial Networks [48.686454485328895]
This study introduces MITS-GAN, a novel approach to prevent tampering in medical images.
The approach disrupts the output of the attacker's CT-GAN architecture by introducing finely tuned perturbations that are imperceptible to the human eye.
Experimental results on a CT scan demonstrate MITS-GAN's superior performance.
arXiv Detail & Related papers (2024-01-17T22:30:41Z)
- High-resolution Image-based Malware Classification using Multiple Instance Learning [0.0]
This paper proposes a novel method of classifying malware into families using high-resolution greyscale images and multiple instance learning.
The implementation is evaluated on the Microsoft Malware Classification dataset and achieves accuracies of up to 96.6% on adversarially enlarged samples.
arXiv Detail & Related papers (2023-11-21T18:11:26Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- From Malware Samples to Fractal Images: A New Paradigm for Classification. (Version 2.0, previous version paper name: Have you ever seen malware?) [0.3670422696827526]
We propose a very unconventional and novel approach to malware visualisation based on dynamic behaviour analysis.
The idea is that these visually striking images are then used to distinguish malware from goodware.
The results of the presented experiments are based on a database of 6 589 997 goodware, 827 853 potentially unwanted applications and 4 174 203 malware samples.
arXiv Detail & Related papers (2022-12-05T15:15:54Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module to make the gradients semantics-aware, enabling the synthesis of plausible images.
We show that our method is also applicable to text-to-image generation with image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Self-Supervised Vision Transformers for Malware Detection [0.0]
This paper presents SHERLOCK, a self-supervision based deep learning model to detect malware based on the Vision Transformer (ViT) architecture.
Our proposed model also outperforms state-of-the-art techniques for multi-class malware classification by type and by family, with macro-F1 scores of .497 and .491, respectively.
arXiv Detail & Related papers (2022-08-15T07:49:58Z)
- Design of secure and robust cognitive system for malware detection [0.571097144710995]
Adversarial samples are generated by intelligently crafting and adding perturbations to the input samples.
The aim of this thesis is to address the critical system security issues.
A novel technique to detect stealthy malware is proposed.
arXiv Detail & Related papers (2022-08-03T18:52:38Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Auxiliary-Classifier GAN for Malware Analysis [4.111899441919165]
We generate fake malware images using auxiliary classifier GANs (AC-GAN).
We consider the effectiveness of various techniques for classifying the resulting images.
While the AC-GAN generated images often appear to be very similar to real malware images, we conclude that from a deep learning perspective, the AC-GAN generated samples do not rise to the level of deep fake malware images.
arXiv Detail & Related papers (2021-07-04T13:15:03Z)
- Adversarial Attacks on Binary Image Recognition Systems [78.78811131936622]
We study adversarial attacks on models for binary (i.e. black and white) image classification.
In contrast to colored and grayscale images, the search space of attacks on binary images is extremely restricted.
We introduce a new attack algorithm called SCAR, designed to fool classifiers of binary images.
arXiv Detail & Related papers (2020-10-22T14:57:42Z)
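The restricted search space noted above can be made concrete with a toy sketch: for a binary image, the only admissible perturbation is flipping pixels between 0 and 1, so an attack reduces to choosing which pixels to flip. SCAR itself is not reproduced here; the linear toy classifier and the greedy score-based flip rule below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-image classifier: sign of a linear score over {0,1} pixels.
w = rng.normal(size=(8, 8))

def score(img: np.ndarray) -> float:
    return float((w * img).sum())

def greedy_flip_attack(img: np.ndarray, max_flips: int = 10) -> np.ndarray:
    """Flip at most `max_flips` pixels, each time choosing the single
    flip that moves the classifier score fastest toward the other class."""
    adv = img.copy()
    target = -np.sign(score(adv))  # push the score to the opposite sign
    for _ in range(max_flips):
        if np.sign(score(adv)) == target:
            break  # decision already changed
        # Flipping pixel (i, j) changes the score by w[i,j] * (1 - 2*adv[i,j]).
        delta = w * (1 - 2 * adv)
        i, j = np.unravel_index(np.argmax(target * delta), delta.shape)
        adv[i, j] ^= 1  # binary flip: 0 <-> 1
    return adv

img = (rng.random((8, 8)) > 0.5).astype(np.uint8)
adv = greedy_flip_attack(img)
```

Because every perturbation is a discrete flip rather than a small continuous change, gradient-based attacks designed for grayscale or color images do not apply directly, which is the motivation for dedicated algorithms such as SCAR.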