Deep Image: A precious image based deep learning method for online
malware detection in IoT Environment
- URL: http://arxiv.org/abs/2204.01690v1
- Date: Mon, 4 Apr 2022 17:56:55 GMT
- Title: Deep Image: A precious image based deep learning method for online
malware detection in IoT Environment
- Authors: Meysam Ghahramani, Rahim Taheri, Mohammad Shojafar, Reza Javidan,
Shaohua Wan
- Abstract summary: In this paper, a different view of malware analysis is considered and the risk level of each sample feature is computed.
In addition to the usual machine learning criteria, namely accuracy and FPR, a proposed criterion based on the risk of samples is also used for comparison.
The results show that the deep learning approach performed better in detecting malware.
- Score: 12.558284943901613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The volume of malware and the number of attacks on IoT devices are rising
every day, which encourages security professionals to continually enhance their
malware analysis tools. Researchers in the field of cyber security have
extensively explored the use of sophisticated analytics and the efficiency of
malware detection. With the introduction of new kinds of malware and new attack
routes, security experts face considerable challenges in developing
efficient malware detection and analysis solutions. In this paper, a different
view of malware analysis is considered: the risk level of each sample
feature is computed, and from those values the risk level of the sample itself is
calculated. In this way, a criterion is introduced that is used alongside the
accuracy and FPR criteria for malware analysis in the IoT environment. Three
malware detection methods based on visualization techniques are proposed: a
clustering approach, a probabilistic approach, and a deep learning approach.
Then, in addition to the usual machine learning criteria,
namely accuracy and FPR, the proposed risk-based criterion is
also used for comparison, with the results showing that the deep learning
approach performed best in detecting malware.
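The abstract does not spell out the exact risk formula, but the idea of scoring each feature's risk and aggregating it per sample, alongside accuracy and FPR, can be sketched as follows. The per-feature risk used here (fraction of malware among samples where the feature is active) is an assumption for illustration, not the authors' definition:

```python
# Hypothetical sketch of a risk-based criterion used alongside accuracy
# and FPR. The risk formula below is an illustrative assumption: the
# risk of a binary feature is the fraction of malware among the samples
# in which it is active, and a sample's risk is the mean risk of its
# active features.
import numpy as np

def feature_risk(X, y):
    """Risk of each binary feature: P(malware | feature active)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    present = X.sum(axis=0)                  # how often each feature fires
    mal = (X * y[:, None]).sum(axis=0)       # ... in malware samples only
    return np.divide(mal, present, out=np.zeros_like(mal), where=present > 0)

def sample_risk(X, risk):
    """Risk of a sample: mean risk of its active features."""
    X = np.asarray(X, dtype=float)
    active = X.sum(axis=1)
    total = X @ risk
    return np.divide(total, active, out=np.zeros_like(total), where=active > 0)

def accuracy_fpr(y_true, y_pred):
    """The two standard criteria the paper compares against."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    acc = (y_true == y_pred).mean()
    neg = y_true == 0
    fpr = (y_pred[neg] == 1).mean() if neg.any() else 0.0
    return acc, fpr
```

A detector can then be compared on all three numbers: accuracy, FPR, and, for instance, the mean risk of the samples it misclassifies.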
Related papers
- A Novel Approach to Malicious Code Detection Using CNN-BiLSTM and Feature Fusion [2.3039261241391586]
This study employs the minhash algorithm to convert binary files of malware into grayscale images.
The study utilizes IDA Pro to decompile and extract opcode sequences, applying N-gram and tf-idf algorithms for feature vectorization.
A CNN-BiLSTM fusion model is designed to simultaneously process image features and opcode sequences, enhancing classification performance.
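The "binary file to grayscale image" step that this summary (and image-based detectors generally) relies on can be sketched as below. The fixed-width reshape is a simplifying assumption; the cited paper derives images via MinHash, and real systems typically pick the width from the file size and resize to a fixed CNN input:

```python
# Hedged sketch of the common "malware as grayscale image" conversion:
# raw bytes become a 2-D array of pixel intensities. The width heuristic
# here is an assumption, not the exact scheme of the cited paper.
import numpy as np

def bytes_to_grayscale(data: bytes, width: int = 16) -> np.ndarray:
    """Map a byte string to a (height, width) uint8 image, zero-padded
    so the last row is complete."""
    buf = np.frombuffer(data, dtype=np.uint8)
    height = -(-len(buf) // width)            # ceiling division
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(buf)] = buf
    return padded.reshape(height, width)
```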
arXiv Detail & Related papers (2024-10-12T07:10:44Z)
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z)
- Explainable Malware Analysis: Concepts, Approaches and Challenges [0.0]
We review the current state-of-the-art ML-based malware detection techniques and popular XAI approaches.
We discuss research implementations and the challenges of explainable malware analysis.
This theoretical survey serves as an entry point for researchers interested in XAI applications in malware detection.
arXiv Detail & Related papers (2024-09-09T08:19:33Z)
- Comprehensive evaluation of Mal-API-2019 dataset by machine learning in malware detection [0.5475886285082937]
This study conducts a thorough examination of malware detection using machine learning techniques.
The aim is to advance cybersecurity capabilities by identifying and mitigating threats more effectively.
arXiv Detail & Related papers (2024-03-04T17:22:43Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
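One plausible reading of an "adversarial rate" metric, as a code sketch: the fraction of inputs whose prediction flips under a bounded perturbation. The random-noise probe below is a stand-in for the cited paper's formal-verification machinery and only gives a weak lower bound:

```python
# Illustrative sketch (not the paper's exact definition): adversarial
# rate as the fraction of inputs whose label changes under some bounded
# perturbation. Random uniform noise is used as a cheap probe; a real
# attack or a formal verifier would find more flips.
import numpy as np

def adversarial_rate(predict, X, eps, trials=10, seed=0):
    """Fraction of rows in X whose prediction flips under at least one
    of `trials` uniform perturbations with |delta| <= eps."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    base = predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    for _ in range(trials):
        noisy = X + rng.uniform(-eps, eps, size=X.shape)
        flipped |= predict(noisy) != base
    return flipped.mean()
```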
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Towards a Fair Comparison and Realistic Design and Evaluation Framework of Android Malware Detectors [63.75363908696257]
We analyze 10 influential research works on Android malware detection using a common evaluation framework.
We identify five factors that, if not taken into account when creating datasets and designing detectors, significantly affect the trained ML models.
We conclude that the studied ML-based detectors have been evaluated optimistically, which justifies the good published results.
arXiv Detail & Related papers (2022-05-25T08:28:08Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation [9.143713488498513]
This work systematically examines the state-of-the-art malware detection approaches, that utilize various representation and learning techniques.
We show that software mutations with functionality-preserving operations, such as stripping and padding, significantly deteriorate the accuracy of such detectors.
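The "padding" mutation this summary mentions can be illustrated in a few lines: appending inert bytes leaves a program's behaviour unchanged but shifts the static features (here, a byte histogram) that some detectors consume. The histogram feature is a common choice used here for illustration, not necessarily the representation the cited paper attacks:

```python
# Hedged illustration of a functionality-preserving "padding" mutation:
# appending inert filler bytes does not change what a program does, but
# it shifts static features such as the normalised byte histogram.
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    """Normalised 256-bin byte-frequency feature vector."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

def pad_sample(data: bytes, filler: int = 0x90, n: int = 1024) -> bytes:
    """Append n copies of an inert filler byte (0x90 is x86 NOP)."""
    return data + bytes([filler]) * n
```

Comparing `byte_histogram(sample)` with `byte_histogram(pad_sample(sample))` shows how far the feature vector drifts while the executable's behaviour is untouched.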
arXiv Detail & Related papers (2021-08-30T16:54:07Z)
- Towards an Automated Pipeline for Detecting and Classifying Malware through Machine Learning [0.0]
We propose a malware taxonomic classification pipeline able to classify Windows Portable Executable (PE) files.
Given an input PE sample, it is first classified as either malicious or benign.
If malicious, the pipeline further analyzes it to establish its threat type, family, and behavior(s).
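The two-stage structure described above can be sketched as a small dispatcher: a malicious-vs-benign gate, then a family classifier that runs only on positives. Both model interfaces here are hypothetical placeholders, not the authors' implementation:

```python
# Minimal sketch of a two-stage triage pipeline: stage 1 gates the
# sample, stage 2 (family classification) runs only on positives.
# `is_malicious` and `family_of` are hypothetical stand-ins for
# whatever trained models a real pipeline would plug in.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    malicious: bool
    family: Optional[str] = None

def classify_pe(features,
                is_malicious: Callable[[object], bool],
                family_of: Callable[[object], str]) -> Verdict:
    """Run the gate; only analyse family for samples flagged malicious."""
    if not is_malicious(features):
        return Verdict(malicious=False)
    return Verdict(malicious=True, family=family_of(features))
```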
arXiv Detail & Related papers (2021-06-10T10:07:50Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach enables efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.