The Road Less Traveled: Investigating Robustness and Explainability in CNN Malware Detection
- URL: http://arxiv.org/abs/2503.01391v1
- Date: Mon, 03 Mar 2025 10:42:00 GMT
- Title: The Road Less Traveled: Investigating Robustness and Explainability in CNN Malware Detection
- Authors: Matteo Brosolo, Vinod Puthuvath, Mauro Conti
- Abstract summary: We integrate quantitative analysis with explainability tools to better understand CNN behavior in malware classification. We demonstrate that obfuscation techniques can reduce model accuracy by up to 50% and propose a mitigation strategy to enhance robustness. This work contributes to improving the interpretability and resilience of deep learning-based intrusion detection systems.
- Score: 15.43868945929965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning has become a key tool in cybersecurity, improving both attack strategies and defense mechanisms. Deep learning models, particularly Convolutional Neural Networks (CNNs), have demonstrated high accuracy in detecting malware images generated from binary data. However, the decision-making process of these black-box models remains difficult to interpret. This study addresses this challenge by integrating quantitative analysis with explainability tools such as Occlusion Maps, HiResCAM, and SHAP to better understand CNN behavior in malware classification. We further demonstrate that obfuscation techniques can reduce model accuracy by up to 50%, and propose a mitigation strategy to enhance robustness. Additionally, we analyze heatmaps from multiple tests and outline a methodology for identifying artifacts, aiding researchers in conducting detailed manual investigations. This work contributes to improving the interpretability and resilience of deep learning-based intrusion detection systems.
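To make the pipeline concrete, below is a minimal sketch of two ingredients the abstract names: converting a binary into a grayscale image and probing a classifier with an occlusion map. The file path, image width, patch size, and the `model` callable are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def binary_to_image(path: str, width: int = 256) -> np.ndarray:
    """Read raw bytes and fold them into a 2-D grayscale image,
    zero-padding the final row (Malimg-style representation)."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = int(np.ceil(len(data) / width))
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[: len(data)] = data
    return padded.reshape(rows, width)

def occlusion_map(model, image: np.ndarray, patch: int = 16, fill: int = 0) -> np.ndarray:
    """Slide a constant patch over the image and record how much the
    model's malware score drops; large drops mark salient byte regions.
    `model` is any callable mapping an image to a scalar score."""
    reference = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = reference - model(occluded)
    return heat
```

Regions whose occlusion causes the largest score drops are the candidates for the kind of manual artifact inspection the paper describes.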
Related papers
- Optimized Approaches to Malware Detection: A Study of Machine Learning and Deep Learning Techniques [0.0]
Digital systems struggle to keep pace with evolving cybersecurity threats.
The daily emergence of more than 560,000 new malware strains poses significant hazards to the digital ecosystem.
This study explores how malware can be detected using machine learning (ML) and deep learning (DL) approaches to address these challenges.
arXiv Detail & Related papers (2025-04-24T20:40:51Z) - Adversarial Challenges in Network Intrusion Detection Systems: Research Insights and Future Prospects [0.33554367023486936]
This paper provides a comprehensive review of machine learning-based Network Intrusion Detection Systems (NIDS)
We critically examine existing research in NIDS, highlighting key trends, strengths, and limitations.
We discuss emerging challenges in the field and offer insights for the development of more robust and resilient NIDS.
arXiv Detail & Related papers (2024-09-27T13:27:29Z) - Characterizing out-of-distribution generalization of neural networks: application to the disordered Su-Schrieffer-Heeger model [38.79241114146971]
We show how interpretability methods can increase trust in predictions of a neural network trained to classify quantum phases.
In particular, we show that we can ensure better out-of-distribution generalization in the complex classification problem.
This work is an example of how the systematic use of interpretability methods can improve the performance of NNs in scientific problems.
arXiv Detail & Related papers (2024-06-14T13:24:32Z) - Comprehensive evaluation of Mal-API-2019 dataset by machine learning in malware detection [0.5475886285082937]
This study conducts a thorough examination of malware detection using machine learning techniques.
The aim is to advance cybersecurity capabilities by identifying and mitigating threats more effectively.
arXiv Detail & Related papers (2024-03-04T17:22:43Z) - Instance Attack: An Explanation-based Vulnerability Analysis Framework Against DNNs for Malware Detection [0.0]
We propose the notion of an instance-based attack.
Our scheme is interpretable, operates in black-box settings, and yields results that can be validated with domain knowledge.
arXiv Detail & Related papers (2022-09-06T12:41:20Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
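The paper targets jet tagging, but the core idea transfers directly to other classifiers. Below is a generic single-step sketch of FGSM-based adversarial training in PyTorch, offered as an illustration rather than the authors' exact procedure; the model, optimizer, and epsilon are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=0.01):
    """One generic FGSM adversarial-training step: craft a perturbed
    batch from the loss gradient, then train on clean + perturbed data."""
    x_req = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
    x_adv = (x_req + eps * grad.sign()).detach()  # FGSM perturbation

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on the mixture of clean and perturbed batches is what makes the learned decision boundary less sensitive to small input shifts.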
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z) - A Comparative Analysis of Machine Learning Techniques for IoT Intrusion Detection [0.0]
This paper presents a comparative analysis of supervised, unsupervised and reinforcement learning techniques on nine malware captures of the IoT-23 dataset.
The developed models consisted of Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Isolation Forest (iForest), Local Outlier Factor (LOF), and a Deep Reinforcement Learning (DRL) model based on a Double Deep Q-Network (DDQN).
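As a flavor of what such a comparison involves, the sketch below trains two of the listed baselines, an SVM and an Isolation Forest, using scikit-learn. The synthetic arrays merely stand in for IoT-23 flow features and labels; the real study uses nine malware captures from that dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score

# Placeholder features/labels standing in for IoT-23 flow features.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)
X_test, y_test = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)

# Supervised baseline: SVM, as in the study's model list.
svm = SVC().fit(X_train, y_train)
print("SVM F1:", f1_score(y_test, svm.predict(X_test)))

# Unsupervised baseline: Isolation Forest fit on benign flows only;
# outliers (predicted as -1) are flagged as malicious.
iforest = IsolationForest(random_state=0).fit(X_train[y_train == 0])
pred = (iforest.predict(X_test) == -1).astype(int)
print("iForest F1:", f1_score(y_test, pred))
```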
arXiv Detail & Related papers (2021-11-25T16:14:54Z) - A comparative study of neural network techniques for automatic software vulnerability detection [9.443081849443184]
The most commonly used method for detecting software vulnerabilities is static analysis.
Some researchers have proposed using neural networks, with their capacity for automatic feature extraction, to make detection more intelligent.
We have conducted extensive experiments to test the performance of the two most typical neural networks.
arXiv Detail & Related papers (2021-04-29T01:47:30Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different classes of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
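The specific coverage criteria are the paper's contribution, but the general shape of such a runtime monitor can be sketched simply: record the activation ranges observed on trusted data, then flag inputs whose activations leave those ranges. This is a simplified stand-in, not the proposed architecture.

```python
import numpy as np

class RangeMonitor:
    """Minimal out-of-range activation monitor: learn each neuron's
    [min, max] on trusted data, then flag inputs whose activations
    fall outside, loosely mirroring coverage-based monitoring."""

    def fit(self, activations: np.ndarray) -> "RangeMonitor":
        # activations: (n_samples, n_neurons) from a trusted dataset
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)
        return self

    def flag(self, activations: np.ndarray, tol: float = 0.0) -> np.ndarray:
        out = (activations < self.lo - tol) | (activations > self.hi + tol)
        return out.any(axis=1)  # True = suspicious input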
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
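One of the "optimized" solutions the survey categorizes, minimizing the quantization error, admits a closed-form scaling factor. The sketch below shows the classic alpha = mean(|W|) approximation; treating this XNOR-Net-style scheme as the representative example is an assumption here, not the survey's only method.

```python
import numpy as np

def binarize(W: np.ndarray):
    """Approximate W by alpha * sign(W), where alpha = mean(|W|)
    minimizes the L2 quantization error ||W - alpha * sign(W)||."""
    alpha = np.abs(W).mean()
    return alpha, np.sign(W)

W = np.random.randn(4, 4)
alpha, B = binarize(W)
print("quantization error:", np.linalg.norm(W - alpha * B))
```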
arXiv Detail & Related papers (2020-03-31T16:47:20Z)