Explainable Malware Detection with Tailored Logic Explained Networks
- URL: http://arxiv.org/abs/2405.03009v1
- Date: Sun, 5 May 2024 17:36:02 GMT
- Title: Explainable Malware Detection with Tailored Logic Explained Networks
- Authors: Peter Anthony, Francesco Giannini, Michelangelo Diligenti, Martin Homola, Marco Gori, Stefan Balogh, Jan Mojzis
- Abstract summary: Malware detection is a constant challenge in cybersecurity due to the rapid development of new attack techniques.
Traditional signature-based approaches struggle to keep pace with the sheer volume of malware samples.
Machine learning offers a promising solution, but faces issues of generalization to unseen samples and a lack of explanation for the instances identified as malware.
- Score: 9.506820624395447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malware detection is a constant challenge in cybersecurity due to the rapid development of new attack techniques. Traditional signature-based approaches struggle to keep pace with the sheer volume of malware samples. Machine learning offers a promising solution, but faces issues of generalization to unseen samples and a lack of explanation for the instances identified as malware. However, human-understandable explanations are especially important in security-critical fields, where understanding model decisions is crucial for trust and legal compliance. While deep learning models excel at malware detection, their black-box nature hinders explainability. Conversely, interpretable models often fall short in performance. To bridge this gap in this application domain, we propose the use of Logic Explained Networks (LENs), a recently proposed class of interpretable neural networks providing explanations in the form of First-Order Logic (FOL) rules. This paper extends the application of LENs to the complex domain of malware detection, specifically using the large-scale EMBER dataset. Our experimental results show that LENs achieve robustness exceeding that of traditional interpretable methods, rivaling black-box models. Moreover, we introduce a tailored version of LENs that generates logic explanations with higher fidelity to the model's predictions.
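To make the rule-extraction idea concrete, here is a minimal, hypothetical sketch of reading a conjunctive logic rule off an interpretable classifier over binarized malware features. It is not the authors' LEN architecture; the feature names, `LogicClassifier`, and `extract_rule` are illustrative assumptions.
```python
# Illustrative sketch only: a linear "logic-style" classifier over
# binarized malware features, with a helper that reads the most
# influential weights off as a conjunctive rule. All identifiers are
# hypothetical, not the paper's LEN implementation.
import torch
import torch.nn as nn

FEATURES = ["has_debug_section", "high_entropy", "imports_crypto_api",
            "is_packed", "writes_registry"]

class LogicClassifier(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Probability that the sample is malware.
        return torch.sigmoid(self.linear(x))

def extract_rule(model: LogicClassifier, names, k: int = 3) -> str:
    # Positive weights become positive literals, negative weights
    # negated literals; the k largest magnitudes form the conjunction.
    w = model.linear.weight.detach().squeeze(0)
    top = torch.argsort(w.abs(), descending=True)[:k]
    literals = [names[i] if w[i] > 0 else f"NOT {names[i]}" for i in top]
    return "malware <- " + " AND ".join(literals)

model = LogicClassifier(len(FEATURES))
# ... train with binary cross-entropy on EMBER-style binarized features ...
print(extract_rule(model, FEATURES))
```
A real LEN learns which concepts matter per class during training rather than post hoc, but the readable IF-THEN output has the same flavor as the rule printed here.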
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations (a minimal adversarial-rate sketch follows this list).
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection [0.0]
This paper consolidates and summarizes the state-of-the-art adversarial learning approaches that can generate realistic examples.
It defines the fundamental properties that are required for an adversarial example to be realistic.
It provides guidelines for researchers to ensure that their future experiments are adequate for a real communication network.
arXiv Detail & Related papers (2023-08-13T17:23:36Z)
- Logically Consistent Adversarial Attacks for Soft Theorem Provers [110.17147570572939]
We propose a generative adversarial framework for probing and improving language models' reasoning capabilities.
Our framework successfully generates adversarial attacks and identifies global weaknesses.
In addition to effective probing, we show that training on the generated samples improves the target model's performance.
arXiv Detail & Related papers (2022-04-29T19:10:12Z)
- Learning Explainable Representations of Malware Behavior [3.718942345103135]
We develop a neural network that processes network-flow data into comprehensible network events.
We then use the integrated-gradients method (a minimal integrated-gradients sketch follows this list) to highlight events that jointly constitute the characteristic behavioral pattern of the threat.
We demonstrate how this system detects njRAT and other malware based on behavioral patterns.
arXiv Detail & Related papers (2021-06-23T11:50:57Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Explaining Network Intrusion Detection System Using Explainable AI Framework [0.5076419064097734]
Intrusion detection systems are one of the important layers of cyber safety in today's world.
In this paper, we use a deep neural network for network intrusion detection.
We also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline.
arXiv Detail & Related papers (2021-03-12T07:15:09Z)
- Towards interpreting ML-based automated malware detection models: a survey [4.721069729610892]
Most existing machine learning models are black-box, which makes their prediction results unreliable.
This paper aims to examine and categorize existing research on ML-based malware detector interpretability.
arXiv Detail & Related papers (2021-01-15T17:34:40Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene on, and show that it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis [2.6397379133308214]
Cybersecurity analysts always prefer solutions that are as interpretable and understandable as rule-based or signature-based detection.
The objective of this paper is to evaluate current state-of-the-art interpretability techniques when applied to ML-based malware detectors.
arXiv Detail & Related papers (2020-01-27T19:10:50Z)
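For the Adversarial Rate entry above, here is a minimal, hypothetical sketch of an adversarial-rate-style metric: the fraction of correctly classified inputs whose prediction flips under a small random perturbation. The paper defines its metric through formal verification; this empirical variant and every identifier in it are assumptions.
```python
# Hypothetical empirical variant of an "adversarial rate" metric: among
# inputs the model classifies correctly, the fraction whose prediction
# flips under at least one bounded random perturbation.
import torch

def adversarial_rate(model, xs, ys, epsilon=0.1, trials=20):
    flipped = 0
    counted = 0
    for x, y in zip(xs, ys):
        if model(x).argmax().item() != y:
            continue  # only rate inputs the model already gets right
        counted += 1
        for _ in range(trials):
            noise = torch.empty_like(x).uniform_(-epsilon, epsilon)
            if model(x + noise).argmax().item() != y:
                flipped += 1
                break
    return flipped / max(counted, 1)

# Toy usage with a random linear classifier on 3-feature inputs.
model = torch.nn.Linear(3, 2)
xs = torch.randn(8, 3)
ys = model(xs).argmax(dim=1).tolist()  # labels the model agrees with
print(adversarial_rate(model, xs, ys))
```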
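And for the integrated-gradients entry above, a minimal sketch of the standard integrated-gradients attribution, approximating IG_i(x) = (x_i - x'_i) * ∫₀¹ ∂F/∂x_i(x' + α(x - x')) dα with a Riemann sum; the toy linear "event scorer" is an assumption, not the paper's network.
```python
# Minimal integrated-gradients sketch for a scalar-output model:
# interpolate from a baseline x' to the input x, average the gradients
# along the path, and scale by (x - x').
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    if baseline is None:
        baseline = torch.zeros_like(x)  # common default: all-zeros baseline
    total = torch.zeros_like(x)
    for step in range(1, steps + 1):
        alpha = step / steps
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        model(point).sum().backward()  # gradient of F at the path point
        total += point.grad
    return (x - baseline) * total / steps

# Toy usage: attribute a linear "event scorer" over 4 event features.
model = torch.nn.Linear(4, 1)
x = torch.tensor([1.0, 0.0, 1.0, 1.0])
print(integrated_gradients(model, x))
```
Attributions sum (approximately) to F(x) - F(x'), which is what makes the method attractive for ranking the network events that jointly explain a detection.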