EvilModel 2.0: Hiding Malware Inside of Neural Network Models
- URL: http://arxiv.org/abs/2109.04344v1
- Date: Thu, 9 Sep 2021 15:31:33 GMT
- Title: EvilModel 2.0: Hiding Malware Inside of Neural Network Models
- Authors: Zhi Wang, Chaoge Liu, Xiang Cui, Jie Yin
- Abstract summary: Turning neural network models into stegomalware is a malicious use of AI.
Existing methods have a low malware embedding rate and a high impact on the model performance.
This paper proposes new methods to embed malware in models with high capacity and no service quality degradation.
- Score: 7.060465882091837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While artificial intelligence (AI) is widely applied in various areas, it is
also being used maliciously. It is necessary to study and predict AI-powered
attacks to prevent them in advance. Turning neural network models into
stegomalware is a malicious use of AI, which utilizes the features of neural
network models to hide malware while maintaining the performance of the models.
However, the existing methods have a low malware embedding rate and a high
impact on the model performance, making them impractical. Therefore, by
analyzing the composition of the neural network models, this paper proposes new
methods to embed malware in models with high capacity and no service quality
degradation. We used 19 malware samples and 10 mainstream models to build 550
malware-embedded models and analyzed the models' performance on the ImageNet
dataset. A new evaluation method that combines the embedding rate, the model
performance impact and the embedding effort is proposed to evaluate the
existing methods. This paper also designs a trigger and proposes an application
scenario in attack tasks combining EvilModel with WannaCry. This paper further
studies the relationship between neural network models' embedding capacity and
the model structure, layer and size. With the widespread application of
artificial intelligence, utilizing neural networks for attacks is an emerging
trend. We hope this work can provide a reference scenario for the defense
against neural network-assisted attacks.
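The abstract positions this work as a reference for defending against neural network-assisted attacks. One basic, generic mitigation (an illustration of supply-chain hygiene, not a method from the paper) is to verify a downloaded model file against a digest published by its provider before loading it, so that a weight file whose bytes were altered after release is rejected. A minimal sketch in Python, with the file path and digest as hypothetical placeholders:

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_file(path: str, expected_sha256: str) -> None:
    """Refuse to use a model file whose bytes differ from the published digest."""
    actual = sha256_of_file(path)
    if actual != expected_sha256.lower():
        raise ValueError(
            f"Model file {path} failed integrity check: "
            f"expected {expected_sha256}, got {actual}"
        )


# Hypothetical usage (path and digest are placeholders):
# verify_model_file("alexnet.pth", "d2c5...e91a")
# model = torch.load("alexnet.pth")  # load only after the check passes
```

A checksum only catches tampering that happens after the digest was published; it does not help if a payload was embedded before the model was released.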
Related papers
- Do You Trust Your Model? Emerging Malware Threats in the Deep Learning Ecosystem [37.650342256199096]
We introduce MaleficNet 2.0, a technique to embed self-extracting, self-executing malware in neural networks.
The MaleficNet 2.0 injection technique is stealthy, does not degrade the model's performance, and is robust against removal techniques.
We implement a proof-of-concept self-extracting neural network malware using MaleficNet 2.0, demonstrating the practicality of the attack against a widely adopted machine learning framework.
arXiv Detail & Related papers (2024-03-06T10:27:08Z)
- New Approach to Malware Detection Using Optimized Convolutional Neural Network [0.0]
This paper proposes a new deep convolutional neural network to accurately and effectively detect malware with high precision.
The baseline model initially achieves a 98% accuracy rate, but after increasing the depth of the CNN, its accuracy reaches 99.183%.
To further solidify the effectiveness of this CNN model, we use the improved model to make predictions on new malware samples within our dataset.
arXiv Detail & Related papers (2023-01-26T15:06:47Z)
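The entry above does not describe the optimized architecture itself; purely as an illustration of the image-based malware classification setup it refers to, a minimal PyTorch CNN over 64x64 grayscale "malware images" might look like the sketch below. The input size, depth, and class count are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn


class SmallMalwareCNN(nn.Module):
    """Minimal CNN for 64x64 grayscale 'malware image' classification (illustrative only)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# Sanity check on a random batch of 64x64 grayscale images.
if __name__ == "__main__":
    model = SmallMalwareCNN()
    logits = model(torch.randn(4, 1, 64, 64))
    print(logits.shape)  # torch.Size([4, 2])
```

Increasing depth, as the entry describes, would amount to stacking additional convolutional blocks before the classifier.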
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
A model developed without consideration for neural network calibration will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
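The Neural Clamping method itself is not detailed in the snippet above; for context, the classic model-agnostic calibration baseline is temperature scaling, which fits a single scalar on held-out validation logits. A minimal sketch (an illustration only, not the toolkit's API):

```python
import torch
import torch.nn.functional as F


def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    steps: int = 200, lr: float = 0.01) -> float:
    """Fit a single temperature T > 0 that minimizes NLL on held-out logits."""
    log_t = torch.zeros(1, requires_grad=True)  # T = exp(log_t) keeps T positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()


# Usage: scale test-time logits by 1/T before softmax.
# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = F.softmax(test_logits / T, dim=1)
```

Dividing logits by a fitted T > 1 softens overconfident predictions without changing the predicted class.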
- An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design [55.62660894625669]
Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions that replace time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z)
- Adversarial Robustness Assessment of NeuroEvolution Approaches [1.237556184089774]
We evaluate the robustness of models found by two NeuroEvolution approaches on the CIFAR-10 image classification task.
Our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero.
Some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness.
arXiv Detail & Related papers (2022-07-12T10:40:19Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- EvilModel: Hiding Malware Inside of Neural Network Models [3.9303867698406707]
We present a method that covertly delivers malware through neural network models while evading detection.
Experiments show that 36.9MB of malware can be embedded into a 178MB AlexNet model within 1% accuracy loss.
We hope this work can provide a reference scenario for the defense against neural network-assisted attacks.
arXiv Detail & Related papers (2021-07-19T02:44:31Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way of achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Learning Queuing Networks by Recurrent Neural Networks [0.0]
We propose a machine-learning approach to derive performance models from data.
We exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations.
This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model.
arXiv Detail & Related papers (2020-02-25T10:56:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.