EvilModel: Hiding Malware Inside of Neural Network Models
- URL: http://arxiv.org/abs/2107.08590v1
- Date: Mon, 19 Jul 2021 02:44:31 GMT
- Title: EvilModel: Hiding Malware Inside of Neural Network Models
- Authors: Zhi Wang, Chaoge Liu, Xiang Cui
- Abstract summary: We present a method that delivers malware covertly through neural network models while evading detection.
Experiments show that 36.9MB of malware can be embedded into a 178MB AlexNet model with less than 1% accuracy loss.
We hope this work provides a reference scenario for defending against neural-network-assisted attacks.
- Score: 3.9303867698406707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Delivering malware covertly while evading detection is critical to advanced malware campaigns. In this paper, we present a method that delivers malware covertly, and without triggering detection, through neural network models. Neural network models are poorly explainable and generalize well. By embedding malware into the neurons, malware can be delivered covertly with minor or even no impact on the performance of the neural network. Meanwhile, since the structure of the neural network model remains unchanged, it can pass the security scans of antivirus engines. Experiments show that 36.9MB of malware can be embedded into a 178MB AlexNet model with less than 1% accuracy loss, and no suspicion is raised by the antivirus engines on VirusTotal, which verifies the feasibility of this method. With the widespread application of artificial intelligence, utilizing neural networks is becoming a trend in malware development. We hope this work provides a reference scenario for defending against neural-network-assisted attacks.
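To make the mechanism concrete: a float32 parameter occupies four bytes, and on little-endian hardware the fourth byte carries the sign and high exponent bits, so a payload written into the three low-order bytes shifts each weight only slightly while leaving the model file's structure untouched. The sketch below illustrates this general idea in NumPy; the 3-bytes-per-parameter layout, the function names, and the demo payload are illustrative assumptions rather than the paper's exact encoding or tooling.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the 3 low-order bytes of each float32 weight.

    Assumes little-endian float32 storage (typical x86/ARM), where byte 3 of
    each value holds the sign and high exponent bits; leaving that byte
    untouched keeps the perturbation of every weight small.
    """
    w = np.ascontiguousarray(weights, dtype=np.float32).copy()
    raw = w.reshape(-1).view(np.uint8).reshape(-1, 4)    # 4 bytes per parameter
    capacity = raw.shape[0] * 3
    if len(payload) > capacity:
        raise ValueError(f"payload too large: {len(payload)} > {capacity} bytes")
    buf = np.frombuffer(payload, dtype=np.uint8)
    full = len(buf) // 3
    raw[:full, :3] = buf[:full * 3].reshape(-1, 3)       # 3 payload bytes per weight
    if len(buf) % 3:
        raw[full, :len(buf) % 3] = buf[full * 3:]
    return w

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    raw = np.ascontiguousarray(weights, dtype=np.float32).reshape(-1).view(np.uint8).reshape(-1, 4)
    return raw[:, :3].reshape(-1)[:length].tobytes()

if __name__ == "__main__":
    layer = np.random.randn(1000).astype(np.float32)     # stand-in for one model layer
    secret = b"demo payload, not actual malware"
    stego = embed_payload(layer, secret)
    assert extract_payload(stego, len(secret)) == secret
    print("max per-weight change:", np.abs(stego - layer).max())
```

The same picture also suggests why simple countermeasures such as hashing distributed weights against a trusted copy, re-quantizing, or fine-tuning a downloaded model would disrupt this kind of payload: they alter exactly the low-order bytes the embedding relies on.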
Related papers
- New Approach to Malware Detection Using Optimized Convolutional Neural Network [0.0]
This paper proposes a new deep convolutional neural network to detect malware accurately and effectively with high precision.
The baseline model initially achieves 98% accuracy; after increasing the depth of the CNN model, its accuracy reaches 99.183%.
To further validate the effectiveness of this CNN model, we use the improved model to make predictions on new malware samples within our dataset.
arXiv Detail & Related papers (2023-01-26T15:06:47Z)
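For the optimized-CNN detector summarized above, the snippet does not specify the architecture, so the following is only a generic sketch of the idea: a small PyTorch CNN over grayscale "malware images" whose depth can be increased. The class count, layer sizes, and 64x64 input resolution are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class MalwareCNN(nn.Module):
    """Small CNN over 64x64 grayscale malware images (binaries rendered as pixels)."""
    def __init__(self, n_classes=2, depth=3):
        super().__init__()
        layers, ch = [], 1
        for i in range(depth):                           # more blocks = the "increased depth" idea
            out_ch = 32 * (2 ** i)
            layers += [nn.Conv2d(ch, out_ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(ch, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

model = MalwareCNN(n_classes=2, depth=4)
logits = model(torch.randn(8, 1, 64, 64))                # batch of 8 grayscale images
print(logits.shape)                                      # torch.Size([8, 2])
```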
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks deployed without consideration for calibration will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Few-shot Backdoor Defense Using Shapley Estimation [123.56934991060788]
We develop a new approach, Shapley Pruning (ShapPruning), to mitigate backdoor attacks on deep neural networks.
ShapPruning identifies the few infected neurons (under 1% of all neurons) while preserving the model's structure and accuracy.
Experiments demonstrate the effectiveness and robustness of our method against various attacks and tasks.
arXiv Detail & Related papers (2021-12-30T02:27:03Z)
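ShapPruning's exact estimation procedure is not described in the snippet above, so the sketch below shows only the generic ingredient: a permutation-sampling Monte Carlo estimate of per-neuron Shapley values with respect to a loss, the kind of score such a defense can rank and prune neurons by. The toy network, function names, and sample counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def shapley_neuron_scores(run_masked, x, y, n_neurons, samples=50):
    """Permutation-sampling Monte Carlo estimate of per-neuron Shapley values.

    run_masked(x, mask) must evaluate the network with a 0/1 mask applied to the
    target layer's neurons; a neuron's score is its average marginal effect on
    the loss as it is added to a random coalition of neurons.
    """
    loss_fn = nn.CrossEntropyLoss()
    scores = torch.zeros(n_neurons)
    for _ in range(samples):
        order = torch.randperm(n_neurons)
        mask = torch.zeros(n_neurons)
        prev = loss_fn(run_masked(x, mask), y).item()    # empty coalition baseline
        for i in order:                                  # add neurons one at a time
            mask[i] = 1.0
            cur = loss_fn(run_masked(x, mask), y).item()
            scores[i] += prev - cur                      # marginal loss reduction
            prev = cur
    return scores / samples

class TinyNet(nn.Module):
    """Toy two-layer network whose hidden neurons we score."""
    def __init__(self, d_in=20, hidden=16, n_classes=2):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(d_in, hidden), nn.Linear(hidden, n_classes)

    def forward(self, x, mask):
        return self.fc2(torch.relu(self.fc1(x)) * mask)  # mask zeroes "removed" neurons

net = TinyNet()
x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
print(shapley_neuron_scores(net, x, y, n_neurons=16))
```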
- EvilModel 2.0: Hiding Malware Inside of Neural Network Models [7.060465882091837]
Turning neural network models into stegomalware is a malicious use of AI.
Existing methods have a low malware embedding rate and a high impact on the model performance.
This paper proposes new methods to embed malware in models with high capacity and no service quality degradation.
arXiv Detail & Related papers (2021-09-09T15:31:33Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
- Classifying Malware Images with Convolutional Neural Network Models [2.363388546004777]
In this paper, we use several convolutional neural network (CNN) models for static malware classification.
The Inception V3 model achieves a test accuracy of 99.24%, which is better than the accuracy of 98.52% achieved by the current state-of-the-art system.
arXiv Detail & Related papers (2020-10-30T07:39:30Z)
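The "malware image" input used by classifiers like the Inception V3 model above is typically produced by reading a binary's raw bytes and reshaping them into a 2-D grayscale array. A minimal version of that preprocessing is sketched below; the fixed row width and zero padding are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def binary_to_image(path: str, width: int = 256) -> np.ndarray:
    """Render a file's raw bytes as a 2-D grayscale image (one byte per pixel)."""
    data = np.fromfile(path, dtype=np.uint8)
    pad = (-len(data)) % width                    # zero-pad the final row
    data = np.concatenate([data, np.zeros(pad, dtype=np.uint8)])
    return data.reshape(-1, width)

# img = binary_to_image("suspicious_sample.bin")  # shape (n_rows, 256), values 0-255
# The array can then be resized and fed to a CNN such as Inception V3.
```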
- Data Augmentation Based Malware Detection using Convolutional Neural Networks [0.0]
Cyber-attacks have become widespread due to the growth of malware in the cyber world.
The most important feature of this type of malware is that it changes shape as it propagates from one computer to another.
This paper aims to provide image-augmentation-enhanced deep convolutional neural network models for detecting malware families in a metamorphic malware environment.
arXiv Detail & Related papers (2020-10-05T08:58:07Z)
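The augmentation pipeline itself is not detailed in the summary above, so the following only illustrates the flavor of such preprocessing: a few cheap, random perturbations (flip, shift, pixel noise) applied to grayscale malware-image tensors before training. All choices here are assumptions, not the paper's settings.

```python
import torch

def augment(img: torch.Tensor) -> torch.Tensor:
    """Cheap augmentations for a (1, H, W) grayscale malware-image tensor in [0, 1]."""
    if torch.rand(1) < 0.5:                              # random horizontal flip
        img = torch.flip(img, dims=[-1])
    shift = int(torch.randint(-4, 5, (1,)))              # small random horizontal shift
    img = torch.roll(img, shifts=shift, dims=-1)
    return (img + 0.02 * torch.randn_like(img)).clamp(0, 1)   # light pixel noise

batch = torch.rand(8, 1, 64, 64)
augmented = torch.stack([augment(x) for x in batch])     # same shape, perturbed content
```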
- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can have 'structural malware' (i.e., compromised weights and activation pathways).
Backdoors are generally difficult to detect, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence, orders of magnitude faster than existing approaches.
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
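The trigger reverse-engineering idea referenced above can be pictured with the classic optimization it builds on: learn a small mask and pattern that push clean inputs toward a chosen label, while penalizing the mask's size. The sketch below implements only that baseline objective; the paper's actual contribution, a measure that avoids scaling with the number of labels, is not reproduced here, and the toy model, the lam weight, and step counts are assumptions.

```python
import torch
import torch.nn as nn

def reverse_engineer_trigger(model, data, target, steps=200, lam=1e-2, lr=0.1):
    """Optimize a mask + pattern that pushes clean inputs toward `target`.

    Baseline objective: minimize CE(model((1 - m) * x + m * p), target) + lam * |m|_1.
    A Trojaned model tends to admit an unusually small mask for its backdoor label.
    """
    n, c, h, w = data.shape
    mask = torch.zeros(1, 1, h, w, requires_grad=True)       # learned in logit space
    pattern = torch.zeros(1, c, h, w, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    ce = nn.CrossEntropyLoss()
    y = torch.full((n,), target, dtype=torch.long)
    for _ in range(steps):
        m, p = torch.sigmoid(mask), torch.sigmoid(pattern)   # keep values in [0, 1]
        loss = ce(model((1 - m) * data + m * p), y) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()

# Toy usage with an untrained stand-in model and random "images":
toy = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
images = torch.rand(16, 3, 32, 32)
mask, pattern = reverse_engineer_trigger(toy, images, target=0, steps=50)
print("mask mass:", mask.sum().item())   # small, concentrated masks are suspicious
```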
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.