Learning Explainable Representations of Malware Behavior
- URL: http://arxiv.org/abs/2106.12328v1
- Date: Wed, 23 Jun 2021 11:50:57 GMT
- Title: Learning Explainable Representations of Malware Behavior
- Authors: Paul Prasse, Jan Brabec, Jan Kohout, Martin Kopp, Lukas Bajer, Tobias
Scheffer
- Abstract summary: An array of specialized detectors abstracts network-flow data into comprehensible \emph{network events}, which a neural network then processes to identify specific threats.
We then use the \emph{integrated-gradients} method to highlight events that jointly constitute the characteristic behavioral pattern of the threat.
We demonstrate how this system detects njRAT and other malware based on behavioral patterns.
- Score: 3.718942345103135
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We address the problems of identifying malware in network telemetry logs and
providing \emph{indicators of compromise} -- comprehensible explanations of
behavioral patterns that identify the threat. In our system, an array of
specialized detectors abstracts network-flow data into comprehensible
\emph{network events} in a first step. We develop a neural network that
processes this sequence of events and identifies specific threats, malware
families and broad categories of malware. We then use the
\emph{integrated-gradients} method to highlight events that jointly constitute
the characteristic behavioral pattern of the threat. We compare network
architectures based on CNNs, LSTMs, and transformers, and explore the efficacy
of unsupervised pre-training experimentally on large-scale telemetry data. We
demonstrate how this system detects njRAT and other malware based on behavioral
patterns.
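The attribution step described in the abstract is the standard integrated-gradients method applied to the embedded event sequence. Below is a minimal sketch of that computation, assuming a PyTorch classifier over embedded network events; the model interface, tensor shapes, zero baseline, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of integrated-gradients attribution for a sequence-of-events classifier.
# Assumes a hypothetical `model` that maps a tensor of shape
# (batch, seq_len, embed_dim) to threat-class logits of shape (batch, num_classes).
import torch

def integrated_gradients(model, x, target_class, steps=50):
    """Approximate IG(x) = (x - b) * (1/m) * sum_k dF/dx evaluated at
    b + (k/m) * (x - b), using an all-zeros baseline b and m = steps."""
    baseline = torch.zeros_like(x)
    accumulated = torch.zeros_like(x)
    for k in range(1, steps + 1):
        alpha = k / steps
        # Interpolate between the baseline and the actual event sequence.
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point)[:, target_class].sum()   # logit of the threat class
        grads = torch.autograd.grad(score, point)[0]  # gradient at this path point
        accumulated += grads
    return (x - baseline) * accumulated / steps       # per-dimension attributions

# Summing the attributions over the embedding dimension yields one relevance score
# per network event; the highest-scoring events can then be read as the behavioral
# pattern (indicator of compromise) behind the classification.
```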
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Explainable Malware Detection with Tailored Logic Explained Networks [9.506820624395447]
Malware detection is a constant challenge in cybersecurity due to the rapid development of new attack techniques.
Traditional signature-based approaches struggle to keep pace with the sheer volume of malware samples.
Machine learning offers a promising solution, but faces issues of generalization to unseen samples and a lack of explanation for the instances identified as malware.
arXiv Detail & Related papers (2024-05-05T17:36:02Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Self-Supervised and Interpretable Anomaly Detection using Network Transformers [1.0705399532413615]
This paper introduces the Network Transformer (NeT) model for anomaly detection.
NeT incorporates the graph structure of the communication network in order to improve interpretability.
The presented approach was tested by evaluating the successful detection of anomalies in an Industrial Control System.
arXiv Detail & Related papers (2022-02-25T22:05:59Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Explaining Network Intrusion Detection System Using Explainable AI Framework [0.5076419064097734]
Intrusion detection systems are one of the important layers of cyber safety in today's world.
In this paper, we use a deep neural network for network intrusion detection.
We also propose an explainable AI framework to add transparency at every stage of the machine-learning pipeline.
arXiv Detail & Related papers (2021-03-12T07:15:09Z)
- NF-GNN: Network Flow Graph Neural Networks for Malware Detection and Classification [11.624780336645006]
Malicious software (malware) poses an increasing threat to the security of communication systems.
We present three variants of our base model, which all support malware detection and classification in supervised and unsupervised settings.
Experiments on four different prediction tasks consistently demonstrate the advantages of our approach and show that our graph neural network model can boost detection performance by a significant margin.
arXiv Detail & Related papers (2021-03-05T20:54:38Z)
- Experimental Review of Neural-based approaches for Network Intrusion Management [8.727349339883094]
We provide an experiment-based review of neural methods applied to intrusion detection.
We offer a complete view of the most prominent neural techniques relevant to intrusion detection, including deep learning approaches and weightless neural networks.
Our evaluation quantifies the value of neural networks, particularly when state-of-the-art datasets are used to train the models.
arXiv Detail & Related papers (2020-09-18T18:32:24Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have tampered with the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Analyzing the Noise Robustness of Deep Neural Networks [43.63911131982369]
Adversarial examples, generated by adding small but intentionally imperceptible perturbations to normal examples, can mislead deep neural networks (DNNs) to make incorrect predictions.
We present a visual analysis method to explain why adversarial examples are misclassified.
arXiv Detail & Related papers (2020-01-26T03:39:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.